This content originally appeared on DEV Community and was authored by Kosisochukwu Ugochukwu
INTRODUCTION
What is Kubernetes?
In simple terms, Kubernetes is like a traffic controller for your apps.
For example, imagine you have a bunch of shipping containers, each with part of your app inside (a website, a database, an API, etc.). These containers need to run on servers. But managing all of them manually (starting them, stopping them, moving them when something breaks) is a pain. Kubernetes is the system that takes care of all that automatically.
What it does for you:
Starts your app for you.
Keeps it running if something crashes.
Puts it on the best available server.
Creates more copies of it if needed.
Why Use Kubernetes?
Think of Kubernetes like the manager of a busy restaurant kitchen:
You (the app owner) give the manager a recipe (your deployment file).
The manager (Kubernetes) decides how many cooks (containers) to assign.
If one cook burns out (a pod crashes), the manager replaces them automatically.
If a lot of customers show up (high traffic), the manager brings in more cooks (auto-scaling).
If you want to change the recipe (new app version), the manager makes the switch smoothly so customers don’t even notice (rolling updates).
Benefit:
Using Kubernetes means you can spend less time fixing broken stuff and more time building your app. It handles the boring, repetitive parts of running apps in production.
Things you need to get started:
Docker installed, basic CLI knowledge, and a Minikube (or similar) setup.
Module 1 – Create a Kubernetes Cluster
In this module, we will set up a local Kubernetes cluster for development purposes using Minikube. This allows you to run and test Kubernetes workloads on your own machine without needing a cloud provider.
Step 1 – Start Your Local Kubernetes Cluster
We will begin by launching a local Kubernetes cluster using Minikube. This creates a single node cluster on your machine, which acts like a small, self-contained version of a full Kubernetes setup. It gives you a safe environment to experiment, deploy apps, and learn how Kubernetes works all without needing internet access or a cloud account.
If you haven’t installed Minikube yet, go ahead and install it first at https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Fx86-64%2Fstable%2Fbinary+download#Service. Once that’s done, you can confirm it was installed correctly by running the following command in your terminal: minikube version
Once Minikube is installed, you can start your local Kubernetes cluster by running the following command: minikube start
This command sets up a single-node cluster on your machine, which will act as your personal Kubernetes environment for development and testing.
Good! We now have a running Kubernetes cluster on our machine.
Minikube has created a virtual environment for us and launched a single-node Kubernetes cluster inside it. This environment behaves just like a real Kubernetes setup, letting you deploy and manage applications locally.
Step 2 – Check the Cluster Version
To interact with your Kubernetes cluster, we will use the command-line tool called kubectl. It’s the main way we will manage and communicate with our cluster.
Do not worry, we will go deeper into how kubectl works later; for now, let’s use it to view some basic information about the cluster.
First, check that kubectl is installed and working by running: kubectl version
This command will show the version details for both the client (our machine) and the server (the Kubernetes cluster running in Minikube).
As you can see, kubectl is now configured correctly!
When you run kubectl version, you will see two main pieces of information:
Client Version: This is the version of the kubectl tool installed on your machine.
Server Version: This is the version of Kubernetes running on your Minikube cluster (specifically on the master node).
Step 3 – View Cluster Details
Now that our cluster is up and running, let’s take a look at some basic information about it.
You can do this by running the following command: kubectl cluster-info
This command shows important details about your Kubernetes cluster, such as the URL of the Kubernetes control plane (also known as the API server) and other key components running inside the cluster.
It’s a quick way to confirm that your cluster is active and responding to commands.
Throughout this tutorial, we will mainly use the command line to deploy and explore our application.
To see the nodes available in your Kubernetes cluster, run:
kubectl get nodes
This command lists all the nodes that your cluster can use to run applications.
Since we are using Minikube, you’ll see just one node, the local virtual machine Minikube started for us.
As you can see, the node’s status says Ready, which means it’s healthy and available to run our apps.
Module 2 – Deploying Your First App
In this section, you’ll learn how to deploy your first application on Kubernetes using the kubectl command-line tool.
The goal is to get hands-on experience with the basic kubectl commands and understand how to interact with your app once it’s running in the cluster. You will see how easy it is to launch, inspect, and manage applications with just a few commands.
Step 1 – Getting Started with kubectl
To begin working with Kubernetes from the command line, we will use a tool called kubectl.
You can type the following in your terminal to see a list of available commands: kubectl
The typical format of a kubectl command looks like this: kubectl <action> <resource>
- action: What you want to do (e.g., create, get, describe)
- resource: What you are acting on (e.g., pods, nodes, deployments)
To get more details about any command, you can add the --help flag. For example: kubectl get nodes --help
Check Your Setup
To make sure kubectl is properly set up to talk to your Kubernetes cluster, run: kubectl version
This should show both:
- Client Version – the version of kubectl on your machine
- Server Version – the version of Kubernetes running in your Minikube cluster
View Your Nodes
Now let’s check the node(s) in the cluster: kubectl get nodes
This command lists the available nodes. Since we are using Minikube, you will only see one node. Kubernetes will use this node to schedule and run your application based on the resources it has available (like CPU and memory).
Step 2 – Deploying Your App
Let’s go ahead and deploy your first application on Kubernetes using the kubectl create deployment command.
This command needs two things:
- A name for your deployment.
- The image you want to deploy (this is the container that runs your app; include the full repository URL for images hosted outside Docker Hub).
Here’s the command: kubectl create deployment kubernetes-bootcamp --image=gcr.io/k8s-minikube/kubernetes-bootcamp:v1
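For reference, the imperative command above is roughly equivalent to applying a Deployment manifest like the following sketch (the name and image are taken from the command; the declarative form is optional here):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
  labels:
    app: kubernetes-bootcamp
spec:
  replicas: 1                      # one copy of the app to start with
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/k8s-minikube/kubernetes-bootcamp:v1
```

You could save this as deployment.yaml and run kubectl apply -f deployment.yaml for a similar result.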
What just happened?
By running that single command, Kubernetes did a lot behind the scenes:
- Found a suitable node to run your app (we only have one, so it chose that one).
- Started the app inside a container on that node.
- Set things up so that if the container crashes or the node fails, Kubernetes will automatically restart or reschedule it.
View Your Deployment
To see the list of active deployments, run: kubectl get deployments
You will see one deployment running, this is your app, and it’s running inside a Docker container managed by Kubernetes.
Step 3 – Viewing Your App Inside the Cluster
By default, applications (called Pods) running in Kubernetes are on a private network. This means:
- They can talk to each other inside the cluster.
- But they are not accessible from the outside, including your browser or host machine.
When we use kubectl, we are talking to the Kubernetes API server, which acts as a bridge between us and the cluster.
Creating a Temporary Connection (Using a Proxy)
We will explore how to make your app publicly accessible in Module 4, but for now, we can use kubectl proxy to temporarily connect to your app from your local machine.
This proxy:
- Lets your terminal access the cluster’s internal API endpoints.
- Doesn’t show output while running.
- Can be stopped anytime with Control + C.
To keep things organized, it’s best to open a second terminal window or tab, and run: echo -e "Starting Proxy. After starting it will not output a response. Please return to your original terminal window\n"; kubectl proxy
This sets up a connection from your local terminal to the Kubernetes cluster via port 8001
Testing the Proxy with curl
Now that the proxy is running, you can test it by accessing the Kubernetes API.
In your original terminal, run: curl http://localhost:8001/version
Note: The proxy was run in a new tab, and the recent commands were executed in the original tab. The proxy still runs in the second tab, and this allowed our curl command to work using localhost:8001.
If this doesn’t work, make sure
kubectl proxy
is still running in your second terminal tab.
Accessing Your Pod Through the API
Kubernetes gives every Pod a unique endpoint through the API.
To access it, you first need to get your Pod’s name, which we will store in the environment variable POD_NAME.
Run this in your original terminal: export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME
You can access the Pod through the API by running: curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME
Note on Public Access
Right now, this setup only works through the proxy (on localhost:8001).
If you want your app to be accessible without using a proxy, for example, from a browser or external system then you will need to create a Service, which we will cover in the next module.
Module 3 – Exploring Your App
In this module, you will learn how to inspect and troubleshoot applications running in Kubernetes using a few key kubectl commands:
- kubectl get – List resources like pods, services, and deployments.
- kubectl describe – Show detailed information about a specific resource.
- kubectl logs – View the output (logs) from inside a container.
- kubectl exec – Run commands inside a running container, like you’re SSH-ing into it.
These tools are essential for debugging and understanding what’s going on inside your cluster when something doesn’t seem right, or when you just want to confirm everything is working correctly.
Step 1 – Check if Your App is Running
To verify that your application is up and running, we will use the kubectl get command to check for existing Pods.
In your terminal, run: kubectl get pods
This command lists all the Pods currently running in your cluster. You should see one Pod, the one created by your deployment in the previous module.
You should also see a status of Running in the output. That means the application is successfully running inside the Pod.
To get a deeper look into what’s happening inside your Pod, like what containers it’s running, what image it’s using, and its configuration you can use the kubectl describe command: kubectl describe pods
This command shows detailed information about your Pod, including:
- The container image being used
- The Pod’s internal IP address
- Ports that are exposed
- Events related to the Pod’s lifecycle (like when it started or if there were any errors)
The output can be a bit long and may include concepts we haven’t covered yet, but don’t worry. As you move through this article, everything will start to make more sense.
Tip: kubectl describe works with many types of Kubernetes resources (like nodes, pods, and deployments) and is meant to be human readable, not used for scripting.
Step 2 – View Your App in the Terminal
As a reminder, Pods in Kubernetes run inside a private network, they are isolated from the outside world. To interact with a Pod directly (for debugging or testing), we need a way to access that internal network.
We will do this by using kubectl proxy, which creates a temporary connection between your local machine and the Kubernetes cluster.
Start the Proxy
Open a second terminal window and run this command: echo -e "Starting Proxy. After starting it will not output a response. Please return to your original terminal window\n"; kubectl proxy
What does this command do?
- It starts a local proxy server on port 8001.
- It runs silently (no output) and stays active until you press Control + C to stop it.
Get the Pod Name
Back in your original terminal, run the following to get the name of your Pod and store it in an environment variable: export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME
This sets a variable called POD_NAME with the name of the Pod that’s running your app.
Query the Pod Using curl
Now that the proxy is running and we have the Pod name, we can make a direct request to the Pod’s API using: curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME
This command above:
- Sends a request through the proxy
- Reaches into the Kubernetes cluster
- Returns details about your running Pod
This is a great way to see your application’s internal state directly from the terminal.
Step 3 – View the Logs from Your Container
In Kubernetes, any output your application sends to standard output (STDOUT) becomes part of the container logs. This is useful for checking what your app is doing, debugging issues, or just seeing printed messages (like logs or errors).
To view the logs from your running container, use this command: kubectl logs $POD_NAME
Since there’s only one container inside the Pod, you don’t need to specify the container name; Kubernetes knows which one to fetch logs from.
This command will show you everything the app has printed since it started, just like console.log, print(), or System.out.println() in a normal app.
Step 4 – Running Commands Inside the Container
Once your Pod is up and running, you can interact directly inside the container, that is almost like SSH-ing into a running app. This is helpful for debugging, inspecting files, or running manual tests.
Check Environment Variables
Let’s start by running a simple command to list all environment variables inside the container: kubectl exec $POD_NAME -- env
This uses kubectl exec to execute the env command inside your Pod.
Since our Pod only has one container, we don’t need to specify its name; Kubernetes knows which one to target.
Start a Shell Session (Bash)
To open a live terminal inside the container, run: kubectl exec -ti $POD_NAME -- bash
- -ti runs the command in interactive mode (just like SSH).
- You are now inside the container’s shell and can run commands as if you were on a Linux server.
View the App Source Code
Your app is a simple Node.js app. You can check the source code by running: cat server.js
As you can see, this prints the contents of the server.js file running in the container.
Test the Running App
While still inside the container, test that the app is responding by curling localhost: curl localhost:8080
You should get a response from the app, usually the same one you’d see if it were exposed in a browser.
Note: We use localhost here because you’re inside the Pod’s network. If this doesn’t work, double-check that you’re running the command inside the container shell.
Exit the Container
When you’re done, type: exit
This will close the shell and return you to your local terminal.
Module 4 – Expose Your App to the Outside World
In this module, we will learn how to make your application accessible outside the Kubernetes cluster, so it’s no longer limited to just internal access.
Here’s what we’ll cover:
- How to expose your app using the kubectl expose command
- How to label Kubernetes resources using the kubectl label command (labels help group and organize your resources)
By the end of this module, you’ll have your Node.js app reachable via a public URL, and you’ll understand how to apply custom tags (labels) to your deployments and Pods.
Step 1 – Expose Your App by Creating a Service
Until now, your application has only been running inside the Kubernetes cluster, which means it’s not accessible from the outside world.
In this step, we’ll expose the app publicly using a Service of type NodePort, which opens a port on the cluster’s Node and allows external traffic to reach your app.
Check if Your App is Still Running
Run this command to make sure the Pod is active: kubectl get pods
List Current Services
Kubernetes might already have some default Services (like one for DNS). Run: kubectl get services
You’ll probably see a kubernetes service created automatically by Minikube.
Expose Your App with a New Service
Now let’s create a new Service that exposes our app externally on port 8080: kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
This creates a new service and maps it to the deployment we created earlier.
Check That the New Service Was Created
Run: kubectl get services
You should now see a new service called kubernetes-bootcamp.
- It has a Cluster IP (internal network address).
- It also has a NodePort, which makes it accessible from outside the cluster.
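The kubectl expose command above corresponds roughly to the following Service manifest sketch (the nodePort itself is assigned automatically by Kubernetes unless you pin one explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
  labels:
    app: kubernetes-bootcamp
spec:
  type: NodePort            # opens a port on the Node for external traffic
  selector:
    app: kubernetes-bootcamp   # routes traffic to Pods with this label
  ports:
  - port: 8080              # the Service's internal port
    targetPort: 8080        # the container port the app listens on
```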
For Docker Desktop Users (Important Note)
If you’re running Minikube with the Docker Desktop driver, accessing NodePorts directly may not work due to networking restrictions.
Instead, run this command: minikube service kubernetes-bootcamp
This command will:
- Open a browser window with your app
- Create an SSH tunnel from the Node to your host
- Allow access to your app externally
To close the tunnel, just hit Control + C.
Get the External Port (NodePort)
Let’s find out the actual port Kubernetes opened to the outside:
kubectl describe services/kubernetes-bootcamp
Create an environment variable called NODE_PORT that has the value of the Node port assigned: export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT
Access the App from Your Host Machine
Now test your app using curl
, combining Minikube’s IP address with the external port: curl $(minikube ip):$NODE_PORT
If everything is set up correctly, you should see a response from your Node.js app!
Congratulations! Our app is now publicly accessible!
Step 2 – Using Labels in Kubernetes
When we created our app using a Deployment, Kubernetes automatically added a label to the Pod. Labels are like name tags, they help us group, filter, and organize resources inside the cluster.
Check the Pod label:
We can look at the deployment details to see which labels were added: kubectl describe deployment
You’ll see something like app=kubernetes-bootcamp listed under Labels.
Use the label to find the Pod:
Now that we know the label, we can use it to search for our Pod:
kubectl get pods -l app=kubernetes-bootcamp
The command above tells Kubernetes: “Show me all the Pods that have the label app=kubernetes-bootcamp.”
You can do the same thing for services: kubectl get services -l app=kubernetes-bootcamp
Store the Pod name:
Let’s grab the Pod’s name and store it in a variable so we can use it easily in the next commands: export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME
Add a new label to the Pod:
We can attach a custom label to the Pod. In this case, we’re labeling the version of the app: kubectl label pods $POD_NAME version=v1
This adds a label version=v1 to the Pod, which is super useful for organizing and managing deployments.
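After this, the Pod’s metadata carries both labels. In YAML, the relevant fragment of the Pod object would look roughly like this (a sketch, not the full Pod manifest; the pod-template-hash value is the one shown earlier):

```yaml
metadata:
  labels:
    app: kubernetes-bootcamp
    pod-template-hash: 579cb4f987   # added automatically by the Deployment
    version: v1                     # the label we just added
```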
Check if the new label was applied:
Let’s confirm the label is attached: kubectl describe pods $POD_NAME
You’ll now see the new label listed under the Pod’s metadata.
Filter using the new label:
Now that the version=v1 label is attached to our Pod, we can use it to filter the Pod list: kubectl get pods -l version=v1
This is helpful when you have lots of Pods and want to target only specific ones, for example, by version, environment (dev, prod), or anything else you define with labels.
Step 3 – Deleting a Kubernetes Service
Now that we have exposed our app using a Service, let’s learn how to delete that Service when we no longer need it.
Delete the service:
We can remove the Service using the following command. It uses the same label we applied earlier: kubectl delete service -l app=kubernetes-bootcamp
This tells Kubernetes: Delete any Service that has the label app=kubernetes-bootcamp
.
Make sure the Service is gone:
After deletion, confirm it with: kubectl get services
You should see that the custom Service is no longer listed, just the default Kubernetes service remains.
Test if the app is still exposed:
Try accessing the app from outside the cluster using the IP and NodePort you used earlier: curl $(minikube ip):$NODE_PORT
You’ll get an error now and that’s expected!
This shows that the app is no longer exposed to the outside world and that the route is closed.
Check if the app is still running inside the cluster:
Even though the service is gone, the app is still running internally. We can confirm that by sending a request from inside the Pod: kubectl exec -ti $POD_NAME -- curl localhost:8080
Sure enough, you should see the app respond.
This is because your Deployment is still running and managing the app inside Kubernetes. The Pod exists, but without a Service, it’s not reachable from outside.
Optional: Want to completely stop the app?
To shut it all down not just make it private you’d need to delete the Deployment too. That would stop the app entirely.
Module 5 – Scale Up Your App
In this module, we will learn how to increase the number of running copies (called replicas) of our app using Kubernetes. This is known as scaling.
Why scale?
Let’s say your app starts getting more traffic. One Pod (a running instance of your app) might not be enough to handle all the requests. So, we tell Kubernetes to run more Pods like hiring extra workers for a busy day.
When you scale up, Kubernetes automatically:
- Creates more Pods from your Deployment
- Spreads the traffic across all Pods (this is called load balancing)
Step 1 – Scaling a Deployment (Making More Copies of Your App)
Let’s now learn how to increase the number of app instances (called replicas) running in your cluster.
First, check how many app instances (replicas) you currently have:
kubectl get deployments
Here’s what the columns in the output mean:
- NAME: Name of your deployment.
- READY: How many Pods are currently running out of how many you want (e.g., 1/1).
- UP-TO-DATE: How many of those Pods are using the most recent version of your app.
- AVAILABLE: How many Pods are ready and available to users.
- AGE: How long the deployment has been running.
Want to see the ReplicaSet (the part of Kubernetes that manages Pods)? Run: kubectl get rs
Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[RANDOM-STRING]. The random string is generated using the pod-template-hash as a seed, for example kubernetes-bootcamp-579cb4f987.
This shows:
- DESIRED: How many Pods you want.
- CURRENT: How many are actually running now.
Now let’s scale the Deployment to 4 replicas. We will use the kubectl scale command, followed by the resource type, the deployment name, and the desired number of instances: kubectl scale deployments/kubernetes-bootcamp --replicas=4
Note: This command tells Kubernetes:
“Run 4 copies of this app instead of just 1.”
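If you manage the Deployment declaratively instead, the same change corresponds to bumping the replicas field in the Deployment spec (a fragment sketch, not a complete manifest):

```yaml
spec:
  replicas: 4   # Kubernetes will create or remove Pods to match this number
```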
Confirm it worked:
Run this again: kubectl get deployments
The change was applied, and we have 4 instances of the application available. Next, let’s check if the number of Pods changed: kubectl get pods -o wide
You should see 4 different Pods, each with its own IP address.
Want details about what happened behind the scenes?
You can inspect the deployment: kubectl describe deployments/kubernetes-bootcamp
This shows events and confirms that the system added the new Pods successfully.
Step 2 – Load Balancing (Distributing Traffic Between Pods)
Now let’s check if our Service is properly sharing traffic across all the app replicas (Pods).
Step 1: Get the Service details
Run this to see the exposed IP and port of your app’s service: kubectl describe services/kubernetes-bootcamp
Note for Docker Desktop users on macOS:
Because of network restrictions, your host can’t directly talk to the Pods.
Run this instead to open a tunnel and launch your app in a browser: minikube service kubernetes-bootcamp
This opens a window in your browser. Keep refreshing the page, each refresh will be served by a different Pod!
You can stop the tunnel by pressing control + C in the terminal.
Step 2: Get the port that’s been exposed by the Service
Let’s create an environment variable so we don’t need to type the port each time: export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT
Step 3: Test the load balancer
Now run this command several times: curl $(minikube ip):$NODE_PORT
Each time you run it, you should see a response from a different Pod.
That means Kubernetes is distributing (load-balancing) your requests across all running Pods.
Step 3 – Scale Down
To scale down the Service to 2 replicas, run again the scale command: kubectl scale deployments/kubernetes-bootcamp --replicas=2
List the Deployments to check if the change was applied with the get deployments command: kubectl get deployments
The number of replicas decreased to 2. List the number of Pods, with get pods: kubectl get pods -o wide
The output confirms that 2 Pods were terminated.
Module 6 – Updating Your Application
In this section, you’ll learn how to update an application that’s already been deployed in Kubernetes. You’ll use the kubectl set image command to apply the update and kubectl rollout undo to revert changes if necessary.
Step 1 – Upgrading the Application Version
Begin by checking the current deployments: kubectl get deployments
Then, verify which Pods are currently running: kubectl get pods
To confirm the image version currently in use by the application, inspect the Pod details: kubectl describe pods
To update the image of the application to version 2, use the set image command, followed by the deployment name and the new image version: kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/k8s-minikube/kubernetes-bootcamp:v2
This will trigger a rolling update, replacing the old Pods with new ones that use the new image. You can track the update by listing the Pods again: kubectl get pods
You’ll see the new Pods being created while the previous ones terminate.
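If you were managing the Deployment from a manifest, the equivalent declarative change would be editing the container image in the Pod template (a fragment sketch; the container name matches the deployment we created earlier):

```yaml
spec:
  template:
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/k8s-minikube/kubernetes-bootcamp:v2   # bumped from v1
```

Changing the Pod template is what triggers the rolling update, whether you do it via kubectl set image or kubectl apply.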
Step 2 – Confirm the Application Has Been Updated
To begin, ensure the application is still running correctly. First, determine the exposed IP and port by describing the service:
kubectl describe services/kubernetes-bootcamp
Note for Docker Desktop users: Since direct Pod access from the host is restricted, use this command to open a tunnel and access the app in your browser: minikube service kubernetes-bootcamp
You can close the tunnel by pressing Control+C. Once done, proceed with the curl command shown below.
Next, create an environment variable called NODE_PORT that has the value of the Node port assigned: export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT
Next, send a request to the application using the exposed IP and Node port:
curl $(minikube ip):$NODE_PORT
Each time you run the command, you’ll likely hit a different Pod. Notice that all Pods are running the latest version (v2).
You can also confirm the update by running the rollout status command: kubectl rollout status deployments/kubernetes-bootcamp
Finally, confirm the image version by inspecting the Pod details:
kubectl describe pods
Check the Image field and notice that it reflects version v2.
Step 3 – Rollback an Update (Revert to a Previous App Version)
Let’s simulate a failed deployment by attempting to update the app with a non-existent image version (v10): kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/k8s-minikube/kubernetes-bootcamp:v10
Check the deployment status: kubectl get deployments
Notice that the output doesn’t list the desired number of available Pods. Run the get pods command to list all Pods: kubectl get pods
Notice some Pods are showing a status like ImagePullBackOff, indicating that Kubernetes is unable to fetch the specified image.
To understand what went wrong, use the following command to view Pod details: kubectl describe pods
In the Events section, you’ll see errors confirming that the image tagged v10 couldn’t be found in the registry.
To recover from this failed rollout, you can revert to the last stable version (v2) using: kubectl rollout undo deployments/kubernetes-bootcamp
This command tells Kubernetes to roll the Deployment back to the most recent successful configuration.
Now, verify that the Pods are back to normal: kubectl get pods
You should see the healthy Pods again. To confirm they’re using the correct image, run: kubectl describe pods
The deployment is once again using a stable version of the app (v2). Showing that the rollback was successful.