In this tutorial you’ll manually install a minimal k3s Kubernetes cluster on three Ubuntu 22.04 VMs provisioned by Vagrant/VirtualBox. We’ll keep it simple, explain every config line, and finish with a working Ingress so you can expose apps cleanly.
What you’ll build
You’ll create a three-node Vagrant/VirtualBox lab with static IPs on a host-only network.
- 1× control-plane node: k3s-master (192.168.230.10)
- 2× workers: k3s-worker-1 (192.168.230.11), k3s-worker-2 (192.168.230.12)
- Networking over the host-only VirtualBox subnet 192.168.230.0/24
- Built-in Traefik Ingress for HTTP traffic
Prerequisites
Prerequisites
- Vagrant and VirtualBox installed on your laptop/workstation.
- Base box: generic/ubuntu2204
- Three VMs already created by your Vagrantfile:

config.vm.box = "generic/ubuntu2204"
nodes = [
  { name: "k3s-master",   hostname: "k3s-master",   ip: "192.168.230.10", mem: 3072, cpus: 2 },
  { name: "k3s-worker-1", hostname: "k3s-worker-1", ip: "192.168.230.11", mem: 2048, cpus: 2 },
  { name: "k3s-worker-2", hostname: "k3s-worker-2", ip: "192.168.230.12", mem: 2048, cpus: 2 },
]
Why these specs? k3s is lightweight, so 2 CPUs / 2–3 GB RAM per node is enough for a demo cluster.
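Before touching the guests, confirm all three VMs are actually up. From the host, in the directory containing the Vagrantfile:

# verify the lab VMs are defined and running
vagrant status
# expect k3s-master, k3s-worker-1, and k3s-worker-2 as "running (virtualbox)"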
Step 1 — SSH into the VMs
# from your host machine
vagrant ssh k3s-master
vagrant ssh k3s-worker-1
vagrant ssh k3s-worker-2
What this does: Vagrant injects a default SSH key and user vagrant, making access fast and password-less.
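If you want to see exactly what Vagrant injected (or use plain ssh/scp later), vagrant ssh-config prints the generated settings per machine:

# show the user, port, and private-key path Vagrant generated for a node
vagrant ssh-config k3s-master
# look for "User vagrant" and the IdentityFile under .vagrant/machines/...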
Step 2 — Base OS prep (run on every node)
# Update packages and install useful tools
sudo apt-get update -y && sudo apt-get upgrade -y
sudo apt-get install -y curl vim jq apt-transport-https ca-certificates
# Make sure hostnames and /etc/hosts are consistent
echo "k3s-master" | sudo tee /etc/hostname # on master; change on workers accordingly
sudo hostname -F /etc/hostname
# Add all nodes to hosts file for easy name resolution
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.230.10 k3s-master
192.168.230.11 k3s-worker-1
192.168.230.12 k3s-worker-2
EOF
# Disable swap (required by Kubernetes)
sudo swapoff -a
sudo sed -ri 's/^([^#].*\sswap\s)/#\1/g' /etc/fstab
# Load kernel modules needed for container networking & iptables
echo -e "br_netfilter\noverlay" | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe br_netfilter
sudo modprobe overlay
# Sysctl: allow iptables to see bridged traffic and enable IPv4 forwarding
cat <<'EOF' | sudo tee /etc/sysctl.d/99-k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sudo sysctl --system
# Optional: simplify first boot by disabling UFW while we test
sudo ufw disable || true
Line-by-line reasoning:
- swapoff + fstab edit: the kubelet (and thus k3s) expects swap to be off, avoiding unpredictable memory-pressure behavior.
- br_netfilter + sysctl: ensures the Kubernetes CNI can program iptables for bridged pod traffic.
- ip_forward=1: allows the node to route packets for pods and services.
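A quick sanity check on each node confirms the prep took effect (swap off, modules loaded, sysctls applied):

# swap line should show 0B in use
free -h | grep -i swap

# both modules should appear
lsmod | grep -E 'br_netfilter|overlay'

# all three keys should print 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward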
Step 3 — Install the k3s server (control-plane)
On k3s-master only:
# Find which interface carries 192.168.230.10 (e.g., eth1 or ens33)
ip -br addr
# Replace IFACE below with your actual interface name
export IFACE=ens33   # example value

# Install k3s server with explicit networking hints
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="\
  --node-ip 192.168.230.10 \
  --node-external-ip 192.168.230.10 \
  --flannel-iface ${IFACE} \
  --tls-san 192.168.230.10 \
  --tls-san k3s-master \
  --write-kubeconfig-mode 644" sh -
What each flag means:
- --node-ip / --node-external-ip: pins the node identity to your host-only IP (not the NAT IP).
- --flannel-iface: tells flannel which NIC to use for encapsulation, avoiding the NAT interface.
- --tls-san: adds the IP/hostname to the API server certificate so your kubectl can connect via 192.168.230.10 or k3s-master without TLS errors.
- --write-kubeconfig-mode 644: makes /etc/rancher/k3s/k3s.yaml readable without sudo (useful in labs).
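To confirm the flags actually took, you can read the node’s InternalIP back from the API and inspect the serving certificate’s SANs; the jsonpath filter and openssl pipeline below are just one way to do it:

# InternalIP should be 192.168.230.10, not the NAT address
kubectl get node k3s-master -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'; echo

# the API server certificate should list 192.168.230.10 and k3s-master
echo | openssl s_client -connect 192.168.230.10:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'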
Verify:
sudo systemctl status k3s --no-pager
kubectl get nodes -o wide
Grab the join token for workers:
sudo cat /var/lib/rancher/k3s/server/node-token
Step 4 — Join the workers
On each worker, discover the interface name (must be the one with 192.168.230.11/12), then run:
# Example for worker-1 (192.168.230.11)
export IFACE=ens33   # set to your interface
export K3S_URL="https://192.168.230.10:6443"
export K3S_TOKEN="<PASTE_TOKEN_FROM_MASTER>"

curl -sfL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_EXEC="\
  --node-ip 192.168.230.11 \
  --node-external-ip 192.168.230.11 \
  --flannel-iface ${IFACE}" sh -
Example for worker-2 (192.168.230.12) — just change the IP:
--node-ip 192.168.230.12 --node-external-ip 192.168.230.12
Back on the master:
kubectl get nodes -o wide # Expect 3 nodes in Ready state
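One optional cosmetic touch: k3s agents show <none> in the ROLES column. If you like tidy output, you can add the role label yourself (the label value is arbitrary):

# cosmetic only: populate the ROLES column for the agents
kubectl label node k3s-worker-1 node-role.kubernetes.io/worker=worker
kubectl label node k3s-worker-2 node-role.kubernetes.io/worker=worker
kubectl get nodes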
Step 5 — First deployment and the “why is EXTERNAL-IP pending?” moment
Create a tiny NGINX app:
kubectl create deployment hello --image=nginx --port=80
kubectl expose deployment hello --type=ClusterIP --port=80
kubectl get deploy,svc,pods -o wide
Why ClusterIP and not LoadBalancer? k3s ships with Traefik as an Ingress controller. On lab clusters (and especially with multiple services wanting port 80), it’s cleaner to use an Ingress instead of competing for hostPort:80 with the built-in ServiceLB. You can still use LoadBalancer later (e.g., via MetalLB), but Ingress is the simplest path today.
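You can see the contention directly: Traefik already owns a LoadBalancer service bound to ports 80/443 on every node, which is exactly what a second port-80 LoadBalancer would collide with:

# Traefik's built-in LoadBalancer service already claims 80 and 443
kubectl -n kube-system get svc traefik -o wide
kubectl get svc -A | grep LoadBalancer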
Step 6 — Publish your app using Traefik Ingress
Create an Ingress that routes hello.local to the hello service:
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: hello.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
EOF
Line-by-line:
- apiVersion / kind: a standard Kubernetes Ingress resource.
- traefik.ingress.kubernetes.io/router.entrypoints: web: tells Traefik to use the HTTP (port 80) entrypoint.
- host: hello.local: the virtual host you’ll use in your browser.
- backend.service.name/port: targets the hello ClusterIP service on port 80.
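Before touching hosts files, check that the Ingress was admitted; Traefik fills in the ADDRESS column once it syncs:

# HOSTS should show hello.local; ADDRESS appears once Traefik has synced
kubectl get ingress hello
kubectl describe ingress hello   # backend details and any events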
Add a host entry on your host machine so the name resolves to any cluster node:
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
192.168.230.10 hello.local
Test:
curl -H "Host: hello.local" http://192.168.230.10
# or open http://hello.local in your browser
Optional — Expose via LoadBalancer (without clashing with Traefik)
If you want to try the built-in ServiceLB, don’t collide with Traefik on port 80. Use a different service port, e.g. 8081:
kubectl delete svc hello
kubectl expose deployment hello --type=LoadBalancer --name=hello --port=8081 --target-port=80
kubectl get svc hello -o wide
Why this works: k3s ServiceLB (klipper-lb) allocates hostPort:<servicePort> on every node. Traefik already uses 80; by choosing 8081 you avoid the conflict and the svclb-hello-* pods can start.
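To see it working, look for the svclb pods (naming can vary slightly across k3s versions) and hit the new port on any node IP:

# one svclb-hello-* pod should be scheduled per node
kubectl get pods -A | grep svclb-hello

# the service answers on 8081 on every node, worker IPs included
curl -s http://192.168.230.11:8081 | head -n 4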
Step 7 — Copy kubeconfig to your host (nice to have)
# from your host (replace user/host if needed)
scp vagrant@192.168.230.10:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Update the API endpoint so kubectl talks to the master IP, not 127.0.0.1
sed -i 's#https://127.0.0.1:6443#https://192.168.230.10:6443#g' ~/.kube/config
kubectl cluster-info
kubectl get nodes
Reasoning: the default kubeconfig points to localhost on the master. Rewriting the server field lets your host talk to the API server directly.
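If you already have clusters in ~/.kube/config, don’t overwrite it; a safer variant keeps the lab kubeconfig in its own file and points KUBECONFIG at it (the filename here is just a suggestion):

# keep the lab cluster separate instead of clobbering ~/.kube/config
scp vagrant@192.168.230.10:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-lab.yaml
sed -i 's#https://127.0.0.1:6443#https://192.168.230.10:6443#g' ~/.kube/k3s-lab.yaml   # GNU sed; macOS needs -i ''
export KUBECONFIG=~/.kube/k3s-lab.yaml
kubectl get nodes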
Troubleshooting cheatsheet
- Nodes NotReady: verify swapoff, the sysctl flags, and that --flannel-iface points to the 192.168.230.x interface.
- Pods stuck Pending: check whether your nodes have enough CPU/RAM; look at kubectl describe pod <name> and the events.
- LoadBalancer EXTERNAL-IP pending: if using ServiceLB, avoid port 80 conflicts or consider MetalLB for a dedicated IP pool.
- No Ingress response: ensure the Ingress host is in your local /etc/hosts; confirm the Traefik pods are Running: kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik -o wide
- Firewall: for first runs, keep UFW off; later allow TCP 6443 (API), TCP 10250 (kubelet), UDP 8472 (VXLAN), and full node-to-node traffic (see the sketch below).
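When you’re ready to re-enable UFW, a minimal rule set for this lab might look like the sketch below; it assumes flannel’s default VXLAN backend and the 192.168.230.0/24 node subnet:

# minimal UFW sketch for this lab (assumes default flannel VXLAN backend)
sudo ufw allow from 192.168.230.0/24   # full node-to-node traffic
sudo ufw allow 6443/tcp                # Kubernetes API server
sudo ufw allow 10250/tcp               # kubelet
sudo ufw allow 8472/udp                # flannel VXLAN
sudo ufw enable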
Clean uninstall (if you want to start over)
# On the master:
sudo /usr/local/bin/k3s-uninstall.sh
On each worker:
sudo /usr/local/bin/k3s-agent-uninstall.sh
What’s next?
- Add TLS with Let’s Encrypt on Traefik and a proper domain.
- Install a metrics stack (Prometheus + Grafana) and a log stack (Loki + Promtail).
- Automate with Ansible later; for now you understand every moving piece.
You now have a functional k3s cluster on Vagrant/VirtualBox with clean, explained configs. Happy shipping! 🚀