How to Build a Kubernetes Cluster on Raspberry Pi with k3s

k3s is a lightweight, CNCF-certified Kubernetes distribution built for resource-constrained environments like Raspberry Pi. In this guide, we will set up a multi-node cluster, deploy a sample application, and expose it as a service.


Prerequisites

  • At least 2 Raspberry Pi devices (Pi 4 with 2GB+ RAM recommended)
  • Raspberry Pi OS (64-bit) on each node
  • All Pis on the same local network with static IPs and SSH enabled

Example IPs: master 192.168.1.100, worker1 192.168.1.101, worker2 192.168.1.102.


Step 1: Prepare Each Node

On every Pi, update the system:

Bash
sudo apt update && sudo apt upgrade -y

Enable memory cgroups by editing /boot/firmware/cmdline.txt (on older Raspberry Pi OS releases the file is /boot/cmdline.txt) and appending the following to the end of the existing line. Everything must stay on a single line, separated by spaces:

Code
cgroup_memory=1 cgroup_enable=memory
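
If you prefer not to edit the file by hand, the one-liner sketched below appends the flags; it assumes your OS uses /boot/firmware/cmdline.txt and that the file is a single line, as it should be:

Bash
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt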

Then reboot: sudo reboot
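
After the reboot, you can confirm the memory cgroup controller is enabled; the memory row should show 1 in the last (enabled) column:

Bash
grep memory /proc/cgroups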


Step 2: Install k3s on the Master Node

SSH into your master node and run:

Bash
curl -sfL https://get.k3s.io | sh -

Give k3s about 30 seconds to start, then verify the node is registered:

Bash
sudo kubectl get nodes
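
At this point you should see a single Ready node. The output below is illustrative; the node name, age, and version will differ on your cluster:

Code
NAME        STATUS   ROLES                  AGE   VERSION
pi-master   Ready    control-plane,master   40s   v1.31.4+k3s1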

Step 3: Retrieve the Join Token

Worker nodes need a token to join the cluster:

Bash
sudo cat /var/lib/rancher/k3s/server/node-token

Copy the output for the next step.
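
Alternatively, if your Pi user can run sudo without a password (the Raspberry Pi OS default), you can pull the token over SSH instead of copying it by hand; the username pi and the IP are examples:

Bash
TOKEN=$(ssh pi@192.168.1.100 "sudo cat /var/lib/rancher/k3s/server/node-token")
echo "$TOKEN"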


Step 4: Join Worker Nodes

SSH into each worker node and run the following, replacing YOUR_TOKEN_HERE with the token from the previous step (each worker should also have a unique hostname):

Bash
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.100:6443 K3S_TOKEN=YOUR_TOKEN_HERE sh -
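
On the worker itself, you can check that the agent came up; k3s-agent is the systemd unit the install script sets up on agent nodes:

Bash
sudo systemctl status k3s-agent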

Verify all nodes from the master:

Bash
sudo kubectl get nodes

Code
NAME         STATUS   ROLES                  AGE     VERSION
pi-master    Ready    control-plane,master   5m      v1.31.4+k3s1
pi-worker1   Ready    <none>                 2m      v1.31.4+k3s1
pi-worker2   Ready    <none>                 1m      v1.31.4+k3s1

Step 5: Deploy a Sample Nginx Application

Create ~/nginx-deployment.yaml:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80

Apply it:

Bash
sudo kubectl apply -f ~/nginx-deployment.yaml
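
Before exposing the deployment, it's worth confirming that all three replicas come up (pulling the image can take a minute on the first run):

Bash
sudo kubectl rollout status deployment/nginx-deployment
sudo kubectl get pods -l app=nginx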

Step 6: Expose the Service

Expose nginx on a NodePort:

Bash
sudo kubectl expose deployment nginx-deployment --type=NodePort --port=80
sudo kubectl get services

The output shows a port mapping like 80:31234/TCP. Access nginx at http://192.168.1.100:31234 in your browser; the same NodePort works on any node's IP.
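
You can also test from the command line; the port 31234 here is only an example, so substitute the NodePort your cluster assigned:

Bash
curl http://192.168.1.100:31234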

Check pod distribution:

Bash
sudo kubectl get pods -o wide
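
The replicas should be spread across the nodes. The output below is illustrative; pod name hashes, ages, and IPs will differ, and trailing columns are omitted:

Code
NAME                                READY   STATUS    RESTARTS   AGE   IP          NODE
nginx-deployment-6b7f675f99-4xkzq   1/1     Running   0          2m    10.42.1.3   pi-worker1
nginx-deployment-6b7f675f99-9sm2d   1/1     Running   0          2m    10.42.2.4   pi-worker2
nginx-deployment-6b7f675f99-tqv8h   1/1     Running   0          2m    10.42.0.7   pi-master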

Troubleshooting

  • Node stuck in NotReady: Verify cgroups are enabled in /boot/firmware/cmdline.txt and that you rebooted.
  • Workers not joining: Check that the token is correct and that port 6443 on the master is reachable. Test from the worker with curl -k https://192.168.1.100:6443 (any HTTP response, even an error, means the port is open).
  • Pods stuck in Pending: Run sudo kubectl describe pod <pod-name> and check the Events section for scheduling errors.
  • Permission denied on kubectl: Copy the kubeconfig into your home directory, creating ~/.kube first: mkdir -p ~/.kube && sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config && sudo chown $USER:$USER ~/.kube/config (see the full sequence after this list).
  • High memory usage: Each Pi needs at least 1GB of free RAM for stable operation.
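
For the kubeconfig fix above, the full sequence on the master looks like this; afterwards kubectl works without sudo:

Bash
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
kubectl get nodes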

Conclusion

You now have a functional Kubernetes cluster running on Raspberry Pi with k3s. From here, deploy more complex applications, set up Helm for package management, or add persistent storage to your cluster.