This content originally appeared on DEV Community and was authored by Latchu@DevOps
In this guide, we'll learn how to deploy a simple Pod in Google Kubernetes Engine (GKE) and expose it to the outside world using a LoadBalancer Service, all in a declarative way (using YAML manifests).
Step 01: Understanding Kubernetes YAML Top-Level Objects
Every Kubernetes resource is defined in a YAML manifest.
The top-level objects you'll see in almost every YAML file are:
apiVersion: # Defines the API version to use (e.g., v1, apps/v1)
kind: # Type of Kubernetes resource (Pod, Service, Deployment, etc.)
metadata: # Information about the resource (name, labels, namespace)
spec: # Desired state (configuration details)
Think of it like this:
- apiVersion → which API group and version to use.
- kind → what resource you want.
- metadata → how Kubernetes identifies it.
- spec → how it should behave.
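If you want to check which apiVersion and fields a given resource supports on your cluster, kubectl's built-in documentation is handy. A quick optional sketch (output varies by cluster version):

# List all resource kinds and the API group/version they belong to
kubectl api-resources

# Show the documented fields for a Pod, then drill into spec.containers
kubectl explain pod
kubectl explain pod.spec.containers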
Step 02: Create a Simple Pod Definition
Let's create a Pod that runs a simple Nginx-based application.
Create a directory for manifests:
mkdir kube-manifests
cd kube-manifests
File: 01-pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: stacksimplify/kubenginx:1.0.0
      ports:
        - containerPort: 80
Key points:
- We label the Pod with app: myapp.
- The container runs on port 80.
Apply it:
kubectl apply -f 01-pod-definition.yaml
kubectl get pods
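Before exposing the Pod, you can optionally confirm it is healthy. A minimal sketch (8080 is just an arbitrary local port chosen for the port-forward test):

# Inspect events, image, and status of the Pod
kubectl describe pod myapp-pod

# Check container logs
kubectl logs myapp-pod

# Temporarily forward a local port to the container and test it
kubectl port-forward pod/myapp-pod 8080:80
curl http://localhost:8080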
Step 03: Create a LoadBalancer Service
Now that we have a Pod, let's expose it using a LoadBalancer Service.
This Service will provision a GCP external load balancer and route traffic to our Pod.
File: 02-pod-LoadBalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-pod-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: myapp          # matches our Pod label
  ports:
    - name: http
      port: 80          # Service port (external entry point)
      targetPort: 80    # Container port inside Pod
Flow of traffic:
[Internet User] → [GCP LoadBalancer] → [K8s Service Port 80] → [Pod Container Port 80]
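Here port and targetPort happen to match because the container listens on 80. As a purely hypothetical sketch (8080 is an assumed value, not used by this guide's image), if the container listened on 8080 instead, only the ports section would change:

ports:
  - name: http
    port: 80          # what clients hit through the load balancer
    targetPort: 8080  # where the container actually listens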
Apply it:
kubectl apply -f 02-pod-LoadBalancer-service.yaml
kubectl get svc
Look for an External IP assigned to your Service.
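Provisioning the GCP load balancer can take a minute or two, so EXTERNAL-IP may show <pending> at first. As a small optional sketch, you can watch the Service and capture the IP in a shell variable (the jsonpath expression assumes GCP assigns an IP rather than a hostname):

# Watch until EXTERNAL-IP changes from <pending> to a real address
kubectl get svc myapp-pod-loadbalancer-service --watch

# Extract just the external IP
EXTERNAL_IP=$(kubectl get svc myapp-pod-loadbalancer-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $EXTERNAL_IP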
Test it:
curl http://<Load-Balancer-External-IP>
You should see the Nginx response page.
Step 04: Clean Up
When done, delete the resources:
kubectl delete -f 01-pod-definition.yaml
kubectl delete -f 02-pod-LoadBalancer-service.yaml
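A quick check that nothing is left behind (and that the GCP load balancer is being torn down):

kubectl get pods
kubectl get svc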
Summary
- Pods are the smallest deployable unit in Kubernetes.
- Services provide a stable endpoint to access Pods.
- A LoadBalancer Service in GKE creates a GCP Load Balancer and exposes your app to the internet.
This is the simplest way to expose a Pod to the outside world.
Thanks for reading! If this post added value, a like, follow, or share would encourage me to keep creating more content.
– Latchu | Senior DevOps & Cloud Engineer
AWS | GCP | Kubernetes | Security | Automation
Sharing hands-on guides, best practices & real-world cloud solutions