Master Kubernetes

Unlock the full power of Kubernetes — the backbone of modern DevOps. Learn how to manage pods, labels, Secrets, ConfigMaps, headless services, and affinity rules to run scalable, secure, and automated workloads.

Whether you’re deploying microservices or managing complex clusters, this guide helps you go from setup to production-ready operations.

What is Kubernetes?

Kubernetes (k8s) is an open-source platform used to automate the deployment, scaling, and management of containerized applications. It groups containers into logical units called pods and runs them across a cluster of machines.

In a practical scenario, imagine you have multiple microservices running for an eCommerce app—like authentication, orders, and payments. Instead of manually running and monitoring these containers, Kubernetes automatically schedules them on available nodes, restarts failed ones, and scales them based on traffic.

In short, Kubernetes helps DevOps teams run applications reliably, efficiently, and consistently across any environment — from on-prem servers to public clouds.

How Does Kubernetes Work?

Kubernetes works by managing a group of machines — called a cluster — that run containerized applications. A cluster has two main parts:

  • Control Plane – Makes global decisions like scheduling, scaling, and maintaining the desired state of the system.
  • Worker Nodes – Run the actual application workloads inside containers.

When you deploy an app, you describe its configuration in a YAML file — including container images, replicas, and resource limits. Kubernetes reads this configuration and ensures your app runs exactly as defined.
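As a minimal sketch of such a configuration, the Deployment below runs three replicas of a hypothetical "orders" service (the name, image, and resource values are illustrative placeholders, not a specific real application):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                # illustrative name
spec:
  replicas: 3                 # desired number of pods
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:v1.0   # hypothetical image
          resources:
            limits:
              cpu: "500m"
              memory: "256Mi"
```

Applying this with kubectl apply -f tells Kubernetes the desired state; the control plane then works to make the cluster match it.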

Here’s what happens behind the scenes:

  1. The API Server receives your deployment request.
  2. The Scheduler finds the best node to run each pod.
  3. The Kubelet on that node starts the containers using the specified image.
  4. The Controller Manager keeps checking if the actual state matches the desired state — if a pod crashes, it recreates it automatically.

In essence, Kubernetes continuously observes, corrects, and optimizes workloads to keep applications running reliably.


Kubernetes Secrets

A Secret in Kubernetes is used to securely store sensitive data like passwords, tokens, or API keys — keeping them out of your container images and configuration files. It helps DevOps teams protect credentials while still making them accessible to pods that need them at runtime.
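As a rough sketch, a Secret can be defined and then exposed to a pod as an environment variable (names and values below are placeholders; stringData lets you write plain text, which Kubernetes base64-encodes on storage):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: s3cr3t-example # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders
      image: example.com/orders:v1.0   # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```

Secrets can also be mounted as files in a volume, which avoids exposing values in the environment.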

To see how to create, mount, and manage Secrets securely in real-world clusters, check out the full guide on Kubernetes Secrets.


Kubernetes ConfigMaps

A ConfigMap in Kubernetes is used to store non-sensitive configuration data — such as environment variables, file paths, or service URLs — separate from your container images. This makes your applications more flexible and easier to update without rebuilding images.

ConfigMaps let DevOps teams manage application settings dynamically across environments.
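A minimal sketch of this pattern: the ConfigMap below holds illustrative settings, and the pod pulls them all in as environment variables via envFrom (names and URLs are assumptions for the example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: info
  PAYMENTS_URL: http://payments.default.svc.cluster.local
---
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: orders
      image: example.com/orders:v1.0   # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config
```

Swapping in a different ConfigMap per environment changes the app's settings without touching the image.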

For a complete walkthrough on creating, using, and injecting ConfigMaps into pods, see the detailed guide on Kubernetes ConfigMaps.


Kubernetes Container Images

In Kubernetes, container images are the core of every deployment — they package your application code, dependencies, and environment into a portable unit. When you define a pod, Kubernetes pulls the image from a registry such as Docker Hub, ECR, or GCR and runs it on an appropriate node.

DevOps teams use clear image naming conventions to manage multiple environments, track versions, and automate updates safely. Kubernetes also supports smooth image rollouts with zero downtime using rolling updates.

When dealing with private registries, Kubernetes handles authentication through imagePullSecrets, ensuring secure and authorized image access.
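A sketch of that setup: the pod below references a registry credential Secret named regcred (a hypothetical name, typically created beforehand with kubectl create secret docker-registry):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  imagePullSecrets:
    - name: regcred           # pre-created docker-registry Secret (assumed name)
  containers:
    - name: orders
      image: registry.example.com/orders:v1.2   # hypothetical private image
```

The kubelet uses the referenced credentials when pulling the image from the private registry.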

For a complete guide on naming, updating, and securing images in Kubernetes, visit the detailed article on Kubernetes Images.


Kubernetes Labels

Labels in Kubernetes are key-value pairs attached to objects like pods, nodes, and services. They help organize and identify resources based on attributes such as environment, version, or app name.

DevOps teams use labels to efficiently group, filter, and select resources during deployments or updates. For example, you can target only pods with env=production when applying configuration changes or rolling updates.

Labels also play a critical role in node and pod affinity, allowing workloads to be scheduled on specific nodes with matching labels.
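As a small illustration, the pod below carries the kind of labels described above (all values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
    env: production
    version: v1.2
spec:
  containers:
    - name: orders
      image: example.com/orders:v1.2   # hypothetical image
```

With labels in place, you can filter resources using selectors, for example kubectl get pods -l env=production.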

For a deeper explanation of how to create, use, and manage labels effectively in real-world Kubernetes environments, explore the complete guide on Kubernetes Labels.


Kubernetes Node & Pod Affinity

Affinity in Kubernetes defines rules that control where pods should run within a cluster, based on node or pod labels. It helps DevOps teams optimize workload placement for performance, reliability, and resource utilization.

  • Node Affinity — Lets you schedule pods on specific nodes that match defined labels. For example, you might run GPU-intensive workloads only on nodes labeled hardware=gpu.
  • Pod Affinity — Ensures certain pods run together on the same node (useful for apps that benefit from low latency, like frontend-backend pairs).
  • Pod Anti-Affinity — Ensures pods run on different nodes, improving availability and fault tolerance for replicated applications.

Affinity rules are defined in the pod specification using requiredDuringSchedulingIgnoredDuringExecution or preferredDuringSchedulingIgnoredDuringExecution.
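For instance, the GPU example above might be expressed with a required node affinity rule like this sketch (the label key, pod name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-worker
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hardware       # assumed node label
                operator: In
                values: ["gpu"]
  containers:
    - name: worker
      image: example.com/gpu-worker:v1.0   # hypothetical image
```

The "required" variant blocks scheduling until a matching node exists; the "preferred" variant treats the rule as a soft preference instead.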

For a detailed explanation with YAML examples and DevOps use cases, see the full guide on Kubernetes Node and Pod Affinity.


Kubernetes Autoscaling

Kubernetes autoscaling ensures your application stays responsive while avoiding unnecessary resource usage.

In Kubernetes, scaling happens in two primary ways: Horizontal Scaling, which adds or removes Pods, and Vertical Scaling, which adjusts the CPU and memory resources assigned to each Pod.

Horizontal Pod Autoscaling (HPA)

The Horizontal Pod Autoscaler automatically adjusts the number of Pods in a workload based on CPU, memory, or custom metrics, ensuring applications scale efficiently under changing load.
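A minimal HPA sketch using the autoscaling/v2 API, targeting a hypothetical "orders" Deployment and scaling on average CPU utilization (the replica bounds and threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders              # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU
```

Resource-based scaling like this requires a metrics source such as the Kubernetes Metrics Server to be running in the cluster.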

Read the full guide: Kubernetes Horizontal Pod Autoscaling (HPA)


Kubernetes Headless Services

For applications that require direct access to individual pods instead of a single load-balanced IP, Kubernetes offers Headless Services. These services expose each pod via DNS, making them perfect for StatefulSets, databases, and peer-to-peer systems.
Explore our detailed guide on Headless Services in Kubernetes to learn how to create, access, and debug headless services and see DNS-based service discovery in action.
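A headless service is an ordinary Service with clusterIP set to None, as in this sketch (the name, selector, and port assume a hypothetical database workload):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None             # no cluster IP — this makes the service headless
  selector:
    app: db                   # assumed pod label
  ports:
    - port: 5432
```

Instead of one virtual IP, a DNS lookup for the service returns the individual pod IPs; paired with a StatefulSet, each pod also gets a stable name like db-0.db.default.svc.cluster.local.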


Kubernetes Service Mesh

Managing traffic between microservices in a Kubernetes cluster can be complex, especially when scaling applications or monitoring performance.

The Kubernetes Service Mesh concepts guide explains how a service mesh solves these real-world problems by simplifying traffic control, observability, and reliability for containerized applications.


Integrations

Looking for practical examples of deploying apps on Kubernetes?

Deploying Jellyfin in Kubernetes:

Running a media server like Jellyfin on your own can be tricky — managing pods, persistent storage, and access can quickly become overwhelming.

Our step-by-step Jellyfin Kubernetes tutorial addresses these challenges by showing how to deploy, configure, and run Jellyfin smoothly in a Kubernetes cluster, including persistent volumes, service exposure, and optional GPU acceleration.



K8s Best Practices for DevOps Teams

Running Kubernetes efficiently requires more than just knowing its components — it’s about managing them securely, reliably, and at scale. Here are some proven best practices followed by DevOps teams in production environments:

  • Use Secrets to store confidential information like API keys and passwords instead of hardcoding them in manifests or images. Apply RBAC policies to restrict who can view or modify them.
  • Keep configuration separate from code using ConfigMaps. This makes your deployments flexible and environment-agnostic — so you can promote workloads from staging to production without image rebuilds.
  • Always use tagged and versioned images (app:v1.2) instead of latest.
  • Use consistent and descriptive labels for all resources — e.g., app, env, tier, version. It simplifies filtering, monitoring, and affinity-based scheduling.
  • Integrate tools like Prometheus, Grafana, and ELK Stack for cluster-wide visibility. Monitoring ensures quick detection of performance issues and helps with capacity planning.
  • Regularly clean up unused resources — old deployments, dangling images, and orphaned ConfigMaps or Secrets — to optimize cluster performance and reduce costs.

By following these best practices, DevOps teams can build secure, scalable, and resilient Kubernetes environments that support continuous delivery and real-world production reliability.


Now that you understand how core Kubernetes components like Secrets, ConfigMaps, Images, Labels, and Affinity work together, you’re ready to manage, scale, and automate real-world containerized applications with confidence.

Kubernetes isn’t just about orchestration — it’s about building a system that’s secure, resilient, and adaptable to change. With these fundamentals in place, you can start deploying production-grade workloads, optimizing performance, and unlocking the full power of cloud-native DevOps.


Author

Sharukhan is the founder of Tecktol. He has worked as a software engineer specializing in full-stack web development.