Interactive Course

Kubernetes Unlocked

From zero to understanding Kubernetes architecture, pods, deployments, services, and more. Learn by exploring animated diagrams, real YAML files, and hands-on examples.

Architecture Pods ReplicaSets Deployments Services Hands-On Lab
01

Kubernetes Architecture

The control tower and the runways -- how a Kubernetes cluster is organized

What Is Kubernetes?

Imagine an air traffic control tower at a busy airport. Planes (your applications) need runways, gates, and fuel. The tower doesn't fly the planes -- it directs them, makes sure no two crash, and reroutes when things go wrong.

Kubernetes is that control tower for your applications. You tell it what you want running, and it figures out where and how.

Node

A machine (physical or virtual) where Kubernetes runs your apps. Like a runway at the airport.

Cluster

A group of nodes working together. If one runway closes, the others keep operating.

Master Node

The control tower itself. It manages the cluster, watches for failures, and decides where apps run.

💡
Core Idea: Desired State

You declare what you want (e.g., "run 3 copies of my app"). Kubernetes constantly works to make reality match your declaration. If a copy crashes, it creates a new one automatically.
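In YAML terms, the desired state is just a field in a manifest. A minimal sketch (the names here are illustrative, and the Deployment object used is covered properly in section 04 -- for now, focus on the replicas line):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # the desired state: Kubernetes keeps reality at 3 pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
```

Delete one of the 3 pods and Kubernetes notices the mismatch between declared and actual state, then creates a replacement -- you never re-run anything.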

The Full Picture

A cluster has two kinds of nodes. Click any component to learn what it does.

Master Node (Control Plane)

🚪 API Server
🗃 etcd
🔄 Controller Manager
📋 Scheduler

Worker Nodes (Data Plane)

🤖 Kubelet
📦 Container Runtime
🌐 Kube Proxy

How It All Comes Together

Watch the components work together to deploy a new pod.

[Interactive: "Cluster Group Chat" -- a 6-message exchange between the components as a pod is deployed]

Check Your Understanding

Scenario: You deployed an app, but one of the 3 pods crashed. Nobody did anything, but a minute later it's back to 3 pods.

Which component detected the problem and fixed it?

If etcd were to lose all its data, what would happen?

02

Pods

The smallest building block -- your app's protective wrapper

Why Pods, Not Just Containers?

When you ship a fragile vase, you don't toss it in a truck bare. You wrap it in bubble wrap, put it in a box, and label it. The vase is your container, and the box with the label is your Pod.

Kubernetes never works with containers directly. Every container runs inside a Pod. A Pod is a single instance of your application -- the smallest object you can create in Kubernetes.

💡
One Pod = One App Instance

Pods and containers have a 1:1 relationship for your main application. Need more capacity? Create more pods, not more containers inside the same pod.
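Declaring a pod takes only a few lines of YAML. A minimal sketch (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx       # labels let other objects find this pod later
spec:
  containers:
    - name: nginx
      image: nginx   # the container this pod wraps
```

This is the declarative equivalent of the kubectl run command shown further down.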

How Pods Scale

When traffic grows, Kubernetes adds more pods -- never more containers inside one pod. When a node runs out of space, new pods go to other nodes.

[Interactive: pod scaling across Node 1 and Node 2 -- start with 1 pod, click "Scale Up" to grow toward 6]

Multi-Container Pods

Sometimes a main container needs a helper -- a sidecar. They share the same network (can talk via localhost), the same storage, and the same lifecycle -- if the pod dies, both die.

📦
Main Container

Your application -- the actual workload (e.g., a web server)

📋
Sidecar Container

A helper -- collects logs, handles proxying, or fetches config updates

📚
Why Not Plain Docker?

With Docker alone, you'd manually manage networking between helper containers, shared storage, monitoring, and restarts. Kubernetes pods handle all of this automatically.
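A sidecar setup can be sketched in one pod manifest. All names and images here are illustrative (busybox tailing nginx logs through a shared volume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web              # main container: the actual workload
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-collector    # sidecar: reads the same files
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
  volumes:
    - name: logs
      emptyDir: {}           # shared storage; deleted when the pod dies
```

The shared emptyDir volume and the shared network namespace are exactly the plumbing you'd otherwise wire up by hand with plain Docker.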

Essential Pod Commands

kubectl run nginx --image=nginx -- Create a pod named "nginx" from the nginx image
kubectl get pods -- List all pods with status and age
kubectl get pods -o wide -- Same + IP address and node info
kubectl describe pod <name> -- Full details: events, IP, image, container ID
kubectl edit pod <name> -- Edit a running pod's configuration

Check Your Understanding

Scenario: Your app is getting 10x more traffic than usual. You need more capacity fast.

What's the Kubernetes way to handle this?

When would you put two containers in the same pod?

03

ReplicaSets

The safety net that keeps the right number of pods alive

Why Do We Need Them?

Imagine a security team at a building. The contract says "always 3 guards on duty." If one calls in sick, a replacement is automatically dispatched. If the crowd grows, more guards are added. That's what a ReplicaSet does for your pods.

High Availability

Pod crashes? A new one is created automatically. Your app stays online.

Load Balancing

Distribute traffic across multiple identical pods so no single pod gets overwhelmed.

Easy Scaling

Change one number (replicas: 5) and Kubernetes adds or removes pods to match.

ReplicationController vs ReplicaSet

Both do the same job, but ReplicationController is the older version. ReplicaSet is newer and more powerful.

Feature | ReplicationController | ReplicaSet
API Version | v1 | apps/v1
Selector Type | Equality only (=, !=) | Set-based (In, NotIn, Exists)
Selector Required? | No (auto-picks from template) | Yes (must specify matchLabels)
Status | Legacy -- avoid in new projects | Current -- always use this one
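The "set-based" selectors in the table look like this in practice. A sketch, with illustrative label keys and values:

```yaml
selector:
  matchExpressions:
    - key: env
      operator: In        # match pods whose env label is one of these values
      values: [prod, staging]
    - key: tier
      operator: Exists    # match any pod that has a tier label at all
```

A ReplicationController can only say env = prod; a ReplicaSet can express "prod or staging, and has any tier label".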

ReplicaSet YAML Explained

YAML
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
Plain English

Use the "apps" API group (ReplicaSets live here)

We're creating a ReplicaSet

Name it "myapp-replicaset" so we can refer to it later

Now the rules:

Always keep exactly 3 pods running

Find pods by their labels:

Look for pods labeled "type: front-end"

Here's the blueprint for each pod:

Label each new pod as "type: front-end" (must match the selector above!)

 

Each pod runs one container:

Named "nginx-container", using the nginx image

💡
Labels Are the Glue

In a cluster with hundreds of pods, labels and selectors are how a ReplicaSet identifies its pods. The selector must match the template's labels -- otherwise the ReplicaSet creates pods it can't find.

ReplicaSet Commands

kubectl create -f replicaset.yaml -- Create a ReplicaSet from a YAML file
kubectl get rs -- List all ReplicaSets
kubectl scale rs myapp --replicas=5 -- Scale up to 5 pods
kubectl describe rs <name> -- Detailed info about a ReplicaSet
kubectl delete rs <name> -- Delete the ReplicaSet and its pods

Check Your Understanding

Scenario: You create a ReplicaSet with replicas: 3 and selector matchLabels: app=web. But there are already 2 pods running with the label app=web that were created manually.

How many NEW pods will the ReplicaSet create?

04

Deployments & Probes

Zero-downtime updates and health checks that keep your app bulletproof

Deployments Wrap ReplicaSets

Think of renovating a hotel room by room. Guests never notice because there's always a room available. A Deployment does the same with your app -- it updates pods gradually so users never experience downtime.

1
ReplicaSet

Manages pod count (always keep N running)

2
Deployment

Wraps the ReplicaSet and adds rolling updates, rollbacks, and health checks

Rule of thumb

Never manage ReplicaSets directly -- always use Deployments

Update Strategies

Strategy | Behavior | Downtime?
RollingUpdate (default) | Old pods replaced gradually with new ones | No
Recreate | All old pods killed first, then new ones created | Yes
YAML
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  minReadySeconds: 0
Plain English

Keep 3 pods running

When updating, do it gradually:

 

maxSurge 25%: Allow extra pods during the update -- percentages round up, so 25% of 3 becomes 1 extra pod (4 max)

maxUnavailable 25%: How many pods may be down at once -- percentages round down, so 25% of 3 becomes 0, and all 3 replicas stay available throughout this update

Don't wait after a new pod starts (0 seconds grace period)

Rolling Update Step by Step

Watch how Kubernetes replaces old pods (red) with new ones (green) -- one at a time, with zero downtime.

[Interactive: rolling update animation -- the old ReplicaSet starts with three v1 pods and the new ReplicaSet is empty; step through 6 stages as pods are replaced one at a time]

Liveness vs Readiness Probes

Two health checks, two different questions. Think of a probe as a doctor's visit:

Liveness Probe: "Is the app alive?"

If it fails: Kubernetes restarts the container (inside the same pod). Like a defibrillator.

Readiness Probe: "Is the app ready for traffic?"

If it fails: Pod is removed from the Service (no traffic sent). Pod keeps running, probe keeps checking. One success = traffic resumes.

💡
Alive But Not Ready

An app can be alive but not ready. Example: your web server is running (liveness passes) but the database it needs is down (readiness fails). Kubernetes stops sending traffic but doesn't restart it -- because the container itself is fine.

Probe Configuration

initialDelaySeconds -- Wait N seconds before the first check (give the app time to boot)
periodSeconds -- Check every N seconds
timeoutSeconds -- If no response in N seconds, that check fails
successThreshold -- N consecutive passes = healthy
failureThreshold -- N consecutive failures = action is taken
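Put together, a probe stanza inside a container spec might look like this -- the endpoint paths, port, and numbers are illustrative, not from a specific project:

```yaml
containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 5   # give the app 5s to boot before the first check
      periodSeconds: 10        # then check every 10s
      timeoutSeconds: 2        # no reply within 2s counts as a failure
      failureThreshold: 3      # 3 failures in a row -> restart the container
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      periodSeconds: 5
      successThreshold: 1      # one pass -> traffic resumes
```

Note the two probes can point at different endpoints precisely because they answer different questions.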

CrashLoopBackOff

When liveness fails repeatedly, Kubernetes restarts the container with increasing delay -- like a snooze alarm that waits longer each time. This state is called CrashLoopBackOff.

Backoff delay between restarts (the 5-minute cap is hardcoded -- you can't change it):

10s -> 20s -> 40s -> 80s -> 160s -> 300s (cap)

Kubernetes retries forever. To fix it: find the root cause, fix the code or config, then delete the pod or rollback the deployment.

Deployment Commands

kubectl rollout status deploy/<name> -- Watch update progress in real time
kubectl rollout history deploy/<name> -- View revision history
kubectl rollout undo deploy/<name> -- Rollback to the previous version
kubectl rollout undo deploy/<name> --to-revision=1 -- Rollback to a specific revision
kubectl rollout pause deploy/<name> -- Pause a rolling update mid-way

Check Your Understanding

Scenario: You deploy a new version of your app. The new pods start up, but the readiness probe keeps failing because of a database configuration error. The old pods are still running fine.

What happens to the rolling update?

Your pod shows status "CrashLoopBackOff". What's the right next step?

05

Service Types

The stable address book that routes traffic to your ever-changing pods

Why Do We Need Services?

Imagine a phone directory for a company. Employees come and go, switch desks, get new extensions. But the directory number (e.g., "Sales: 555-0100") never changes. Callers always reach the right team. A Service is that directory for your pods.

Pods get new IP addresses every time they restart. If another app hardcodes that IP, it breaks. Services give pods a stable address that never changes.

Three Types of Services

Each type builds on the previous one, like nesting dolls.

ClusterIP (Internal Only)

Gets a stable virtual IP inside the cluster. Other pods use this IP or DNS name to reach the service. Not accessible from outside the cluster.

Pod A calls Pod B

Pod A -> ClusterIP (172.20.x.x) -> Pod B

DNS: service-name.namespace.svc.cluster.local

Side-by-Side Comparison

Feature | ClusterIP | NodePort | LoadBalancer
Access | Internal only | External via node IP:port | External via single LB address
Port Range | Any | 30000-32767 | Any (LB handles it)
Load Balances Pods? | Yes (round-robin) | Yes | Yes
Load Balances Nodes? | N/A | No | Yes
Cloud Required? | No | No | Yes
Builds On | -- | ClusterIP | NodePort + ClusterIP
Use Case | Internal microservices | Dev/testing | Production external
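The table's third column in YAML: a LoadBalancer service is essentially the same spec with one field changed. A sketch with illustrative names (an actual external address only appears when a cloud provider is wired in):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer   # the cloud provisions an external load balancer
  selector:
    app: web           # route to pods carrying this label
  ports:
    - port: 80         # the address clients hit
      targetPort: 3000 # the port the container listens on
```

Change type to NodePort or ClusterIP and the same manifest becomes the other two rows of the table -- the "nesting dolls" relationship made concrete.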

Traffic Flow: Group Chat Edition

Watch how traffic flows through a LoadBalancer service to reach a pod.

[Interactive: "Traffic Flow Chat" -- a 5-message trace of a request through the LoadBalancer to a pod]

Check Your Understanding

Scenario: You have a frontend app that needs to call a backend API. Both run as pods in the same Kubernetes cluster. The backend doesn't need to be accessible from outside.

Which service type should you use for the backend?

You create a NodePort service. One of the 3 nodes goes down. What happens to clients connecting to that node's IP?

06

Hands-On: KubeBot

See every concept in action with a real project you can deploy

What Is KubeBot?

KubeBot is an interactive chatbot that teaches Kubernetes concepts. It's built with Node.js, runs in a Docker container, and deploys to Kubernetes. The project itself demonstrates the concepts it teaches.

Kube-Test/
dockerfile -- builds the container image
src/
app.js -- the chatbot application (chat UI + API + health endpoints)
k8-applications/
configmap.yml -- bot name, welcome message, environment
secret.yml -- API key (sensitive data)
deployment.yml -- 3 replicas, rolling updates, probes
service-clusterip.yml -- internal service
service-nodeport.yml -- external access on port 30080
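The contents of configmap.yml and secret.yml aren't reproduced in this course, but based on the descriptions above they would look roughly like this -- every key and value here is an assumption, not the real files:

```yaml
# configmap.yml (sketch -- non-sensitive settings)
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubebot-config
data:
  BOT_NAME: "KubeBot"
  WELCOME_MESSAGE: "Hi! Ask me about Kubernetes."
---
# secret.yml (sketch -- sensitive data, values base64-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: kubebot-secret
type: Opaque
data:
  API_KEY: c2VjcmV0LWtleQ==   # "secret-key" in base64
```

Splitting config this way means the welcome message can change without rebuilding the image -- which is exactly what the Final Check at the end of this section asks about.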

The Dockerfile

DOCKERFILE
FROM node:18-alpine
WORKDIR /app
COPY src/app.js .
EXPOSE 3000
CMD ["node", "app.js"]
Plain English

Start from a tiny Linux image with Node.js 18 pre-installed (~50MB)

Set /app as the working directory inside the container

Copy our app code from the host into the container

Tell Docker this container listens on port 3000

When the container starts, run "node app.js"

The Deployment (Real YAML)

This is the actual deployment.yml from the project. Notice how it uses everything we learned: replicas, rolling updates, and probes.

deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubebot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubebot
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
Plain English

Use the apps API group

Create a Deployment (wraps a ReplicaSet)

Name it "kubebot"

The rules:

Always keep 3 copies running (high availability!)

Find our pods by the label "app: kubebot"

 

When updating, do it gradually:

maxSurge: at most 1 extra pod during updates (25% of 3 rounds up to 1)

maxUnavailable: 25% of 3 rounds down to 0, so no pod is taken down before its replacement is ready

Health Probes in the Real Code

The deployment YAML points probes at /health and /ready. Here's the actual code that handles those checks:

app.js
// Health endpoint (liveness probe)
if (req.url === '/health') {
  res.writeHead(200, {'Content-Type': 'application/json'});
  return res.end(JSON.stringify({status: 'UP'}));
}

// Ready endpoint (readiness probe)
if (req.url === '/ready') {
  res.writeHead(200, {'Content-Type': 'application/json'});
  return res.end(JSON.stringify({status: 'READY'}));
}
Plain English

When Kubernetes asks "are you alive?":

If someone visits /health...

Send back a 200 (success) response

With the message {"status": "UP"}

 

When Kubernetes asks "are you ready for traffic?":

If someone visits /ready...

Send back a 200 response

With {"status": "READY"}

📚
Probe + Code Connection

The deployment.yml says httpGet: path: /health. Kubernetes periodically hits that URL. The app.js code above responds with status 200. Kubernetes sees 200 and says "all good." Any other status (or no response) = probe failure.
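The probe stanza from deployment.yml isn't reproduced above; given the /health and /ready endpoints and port 3000 from app.js, it plausibly looks like this (the timing values are assumptions, not the project's actual numbers):

```yaml
livenessProbe:
  httpGet:
    path: /health   # handled by the first branch in app.js
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready    # handled by the second branch in app.js
    port: 3000
  periodSeconds: 5
```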

KubeBot's Services

The project has two services -- one for internal access, one for external.

service-clusterip.yml
apiVersion: v1
kind: Service
metadata:
  name: kubebot-internal
spec:
  type: ClusterIP
  selector:
    app: kubebot
  ports:
    - port: 80
      targetPort: 3000
Plain English

Core Kubernetes API

Create a Service

Call it "kubebot-internal"

 

Internal-only (ClusterIP is the default)

Route traffic to pods labeled "app: kubebot"

 

Listen on port 80 (standard HTTP)

Forward to port 3000 on the pod (where app.js listens)

service-nodeport.yml
apiVersion: v1
kind: Service
metadata:
  name: kubebot-nodeport
spec:
  type: NodePort
  selector:
    app: kubebot
  ports:
    - port: 80
      targetPort: 3000
      nodePort: 30080
Plain English

Core Kubernetes API

Create a Service

Call it "kubebot-nodeport"

 

Expose externally via NodePort

Route to pods labeled "app: kubebot"

 

Internal port 80

Pod listens on 3000

External access on port 30080 (any node's IP:30080 works!)

Deploy It Yourself

1
Build the Docker image

docker build -t kubebot:v1 .

2
Apply all Kubernetes manifests

kubectl apply -f k8-applications/

3
Check the deployment

kubectl get pods -- you should see 3 pods running

4
Access KubeBot

kubectl port-forward svc/kubebot-internal 3000:80 then open localhost:3000

5
Try it out!

Type "pod", "service", "deployment", or "help" in the chat

Final Check

Scenario: You want to change KubeBot's welcome message from the current text to "Welcome to K8s Academy!" without redeploying the app.

Which Kubernetes resource would you update?

You Made It!

You now understand Kubernetes architecture, pods, ReplicaSets, deployments with rolling updates and probes, and all three service types. Clone the KubeBot project, deploy it, and keep experimenting.

🏆
What's Next?

Try scaling KubeBot to 5 replicas, triggering a rolling update with a new image tag, or deleting a pod and watching Kubernetes recreate it. The best way to learn is to break things on purpose.