Building a Remote-Accessible Kubernetes Home Lab with k3s



This content originally appeared on DEV Community and was authored by Buun ch.

Turn a mini PC into your personal Kubernetes development environment accessible from anywhere in the world!

Introduction

Developers today face a common dilemma: the need for a persistent Kubernetes environment without the high costs of cloud services or the battery drain of running containers locally.

Kubernetes has become essential for orchestrating multiple services running in containers. However, cloud services like AWS, Azure, or GCP can be prohibitively expensive for personal projects or learning environments. Meanwhile, running Docker and Kubernetes on a development laptop quickly drains the battery, particularly when working remotely.

This guide demonstrates how to build a Kubernetes cluster on a mini PC at home or in your office, creating a development environment accessible from anywhere via the internet.

Choosing the Right Kubernetes Distribution

Several Kubernetes distributions are available for local development:

  • minikube and kind: These tools excel at quickly launching clusters for clean testing environments but lack the stability required for long-term development or production use
  • MicroK8s: Built on Snap package management, MicroK8s is designed specifically for development environments with comprehensive tooling support
  • k3s: Features a single-binary installer optimized for resource-constrained environments

While both MicroK8s and k3s suit development environments well, k3s has experienced rapid growth in community adoption. GitHub star history shows k3s’s exceptional popularity trajectory, validating its selection for this home lab setup.

[Image: GitHub star history]

Required Cloud Services

A self-hosted Kubernetes cluster still requires certain cloud services:

  1. Domain Registrar: Essential for registering and managing domain names
  2. Tunneling Service: Enables secure internet access to the cluster (Cloudflare Tunnel serves this purpose)
  3. Container Registry: Necessary for storing container images, as pushing large Docker images through Cloudflare Tunnel from home networks presents bandwidth limitations

Setting Up the Linux Machine

The setup begins with preparing a Linux machine accessible via SSH from the development workstation. The initial configuration involves:

  1. Installing Linux with Docker support
  2. Configuring SSH daemon for remote access
  3. Setting up passwordless sudo execution (a requirement for k3sup)
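
On Arch Linux, for example, the base preparation might look like the following (package names are distro-specific, and the Docker step is only needed if you plan to build images on the machine):

```shell
# Arch Linux example; adapt package names for other distributions
sudo pacman -S --needed openssh docker
# Start the services now and enable them on every boot
sudo systemctl enable --now sshd docker
```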

Arch Linux

Arch Linux users must configure sshd to support keyboard-interactive authentication with PAM. Create the following file.

/etc/ssh/sshd_config.d/10-pamauth.conf:

KbdInteractiveAuthentication yes
AuthenticationMethods publickey keyboard-interactive:pam

After creating this file, restart the sshd service to apply the changes.

Create the sudoers file for your account. For example, if your account name is buun, create the following file.

/etc/sudoers.d/buun:

buun ALL=(ALL:ALL) NOPASSWD: ALL
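
A malformed sudoers file can lock you out of sudo entirely, so it is worth validating the drop-in before closing your root shell; `visudo -cf` checks a file's syntax without installing it:

```shell
# Check the syntax of the drop-in file (non-zero exit status on errors)
sudo visudo -cf /etc/sudoers.d/buun
```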

Installing Required Tools

The local development machine requires specific tooling for cluster management. Begin by cloning the repository:

git clone https://github.com/buun-ch/buun-stack
cd buun-stack

The project uses mise for tool version management. Follow the Getting Started guide to install it.
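
At the time of writing, the mise Getting Started guide offers a one-line installer script; verify against the current docs before piping anything to a shell:

```shell
# Install mise via the official script (installs to ~/.local/bin/mise)
curl https://mise.run | sh
# Activate it for zsh; see the mise docs for bash/fish equivalents
echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc
```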

After installing mise, install all required tools:

mise install
mise ls -l  # Verify installed tools

The toolchain includes:

  • gomplate: Template engine for generating configuration files
  • gum: Interactive CLI for user input collection
  • helm: Kubernetes package manager
  • just: Task runner organizing installation commands as recipes
  • kubelogin: kubectl authentication plugin
  • vault: HashiCorp Vault CLI client

Creating the Kubernetes Cluster

With the toolchain ready, generate the configuration file:

just env::setup  # Creates .env.local with your configuration

This interactive command collects necessary information and generates the .env.local file containing environment variables for subsequent operations.
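
As a rough sketch of how this file is consumed (variable names other than KEYCLOAK_REALM are hypothetical), the recipes simply source shell-style KEY=VALUE pairs:

```shell
# Write a sample .env.local to a temp path (the real file lives in the repo root)
cat > /tmp/env.local.sample <<'EOF'
KEYCLOAK_REALM=buunstack
K8S_CONTEXT=minipc1
EOF
# Recipes source the file and read the variables
. /tmp/env.local.sample
echo "realm=$KEYCLOAK_REALM context=$K8S_CONTEXT"
```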

Deploy the k3s cluster:

just k8s::install
kubectl get nodes  # Verify cluster status

The installation leverages k3sup to deploy k3s on the remote machine while automatically creating/modifying kubeconfig (~/.kube/config) on your local machine.
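
Under the hood the recipe drives k3sup; a hand-rolled equivalent (the IP address, user, and context name here are placeholders) would look roughly like:

```shell
# Install k3s on the remote host over SSH, then merge the resulting
# kubeconfig into the local ~/.kube/config under the given context name
k3sup install \
  --ip 192.168.1.50 \
  --user buun \
  --context minipc1 \
  --local-path ~/.kube/config \
  --merge
```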

Configuring Cloudflare Tunnel

Cloudflare Tunnel provides secure internet access to the cluster. This example assumes a domain with DNS managed by Cloudflare.

In the Cloudflare dashboard:

  1. Navigate to Zero Trust > Network > Tunnels
  2. Click “+ Create a tunnel”
  3. Click “Select Cloudflared”
  4. Enter the name of your tunnel
  5. Click “Save tunnel”

If your Linux distribution is Debian- or Red Hat-based, follow the instructions displayed on the page.

If you are using Arch Linux, install cloudflared with:

paru -S cloudflared

and create the systemd unit file:

sudo systemctl edit --force --full cloudflared.service

Copy and paste the systemd unit file content from the “Configure cloudflared parameters” page in the Cloudflare Zero Trust docs:

[Unit]
Description=Cloudflare Tunnel
After=network.target

[Service]
TimeoutStartSec=0
Type=notify
ExecStart=/usr/bin/cloudflared tunnel --loglevel debug --logfile /var/log/cloudflared/cloudflared.log run --token <TOKEN VALUE>
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target

  • Change the ExecStart path from /usr/local/bin to /usr/bin
  • Replace <TOKEN VALUE> with your tunnel token
    • The token is shown on the tunnel overview page

Public host names

Configure the following public hostnames:

  • ssh.yourdomain.com → SSH localhost:22
  • vault.yourdomain.com → HTTPS localhost:443 (No TLS Verify)
  • auth.yourdomain.com → HTTPS localhost:443 (No TLS Verify)
  • k8s.yourdomain.com → HTTPS localhost:6443 (No TLS Verify)

Unless you are building a zero-trust network, you can enable “No TLS Verify” because only Cloudflare can reach your local machine.

SSH

Here is an example of SSH configuration for macOS.

brew install cloudflared

Create ~/.ssh/config:

Host yourdomain
  Hostname ssh.yourdomain.com
  ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h
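
With that stanza in place, a plain `ssh` invocation tunnels through Cloudflare Access:

```shell
# cloudflared is invoked transparently via the ProxyCommand setting
ssh yourdomain
```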

Installing Core Components

Before setting up Kubernetes remote access, install the following components.

Longhorn – Distributed Storage

Longhorn provides distributed block storage for Kubernetes. Development environments benefit from its ability to create PersistentVolumes backed by NFS exports, enabling work with large datasets stored on network-attached storage.

Prerequisites: Install open-iscsi on the Linux machine and ensure the iscsid service is running.

For example, if you are using Arch Linux:

ssh your-linux-machine

sudo pacman -S open-iscsi
sudo systemctl enable iscsid
sudo systemctl start iscsid

On your local machine, run:

just longhorn::install

HashiCorp Vault – Secrets Management

Vault serves as the central secrets management system, handling encryption keys and sensitive data across all applications in the cluster.

just vault::install  # Store the root token securely

PostgreSQL – Database Cluster

PostgreSQL provides relational database services for Keycloak and application data storage:

just postgres::install

Keycloak – Identity Management

Keycloak delivers comprehensive identity and access management, providing authentication and single sign-on capabilities for both applications and the Kubernetes API:

just keycloak::install

Configuring OIDC Authentication

Create the Keycloak realm:

just keycloak::create-realm

The default realm name is buunstack. You can change it by editing .env.local:

KEYCLOAK_REALM=your-realm

Configure Vault OIDC integration:

just vault::setup-oidc-auth

Create the initial user account:

just keycloak::create-user

Enable Kubernetes OIDC authentication:

just k8s::setup-oidc-auth

This creates a new kubectl context with OIDC-based authentication. If the original context is named minipc1, the OIDC context is created as minipc1-oidc.
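
Before switching, you can confirm that both contexts exist (context names follow the example above):

```shell
# List contexts; the OIDC variant appears alongside the original
kubectl config get-contexts
```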

Testing the Setup

Validate the OIDC authentication configuration:

kubectl config use-context minipc1-oidc
kubectl get nodes

The cluster is now accessible from anywhere via the internet.

Verify full functionality by testing pod operations. Create a Pod and Service:

kubectl apply -f debug/debug-pod.yaml
kubectl apply -f debug/debug-svc.yaml

Run kubectl exec:

$ kubectl exec debug-pod -it -- sh
/ # uname -a
Linux debug-pod 6.12.41-1-lts #1 SMP PREEMPT_DYNAMIC Fri, 01 Aug 2025 20:42:03 +0000 x86_64 GNU/Linux
/ # ps x
PID   USER     TIME  COMMAND
    1 root      0:00 sh -c echo "<h1>Debug Pod Web Server</h1><p>Hostname: $(hostname)</p><p>Time: $(date)</p>" > /t
    9 root      0:00 httpd -f -p 8080 -h /tmp
   17 root      0:00 sh
   24 root      0:00 ps x

Run kubectl port-forward:

kubectl port-forward svc/debug-service 8080:8080

Connect to the service:

$ curl localhost:8080
<h1>Debug Pod Web Server</h1><p>Hostname: debug-pod</p><p>Time: Wed Aug 20 02:15:00 UTC 2025</p>

Test Vault OIDC integration:

export VAULT_ADDR=https://vault.yourdomain.com
vault login -method=oidc
vault kv get -mount=secret -field=password postgres/admin

Conclusion

This guide has demonstrated how to build an internet-accessible Kubernetes home lab secured with Cloudflare Tunnel and OIDC authentication. The resulting infrastructure provides a cost-effective, remotely accessible cluster suitable for both development and learning purposes.

Key Benefits of This Setup

  • Cost Efficiency: Eliminates expensive cloud service fees while maintaining professional-grade capabilities
  • Remote Accessibility: Full cluster access from any location via secure internet connection
  • Enterprise Security: OIDC authentication ensures robust access control
  • Infrastructure as Code: Automated deployment reduces complexity and ensures reproducibility
  • Learning Platform: Ideal environment for Kubernetes experimentation and skill development
