Goal: Install Helm on the k3s master, deploy NGINX via Helm, expose it through Traefik Ingress, and reach it from the control host. We run kubectl only on the master; the control host uses Ansible.
Reference environment: Kubuntu 25 host (fullstacklab.site) with SSH user stackadmin; k3s master 192.168.56.10. Namespace apps prepared on Day 5.
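The playbooks below assume an Ansible inventory with k3s_master and k3s_workers groups. A minimal sketch, with placeholder worker IPs (adjust to your lab):

# ansible/inventory.ini (sketch; worker IPs are placeholders)
[k3s_master]
192.168.56.10 ansible_user=stackadmin

[k3s_workers]
192.168.56.11 ansible_user=stackadmin
192.168.56.12 ansible_user=stackadmin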
Step 1 — Open UFW 80/443 (Ingress)
---
# ansible/ufw_ingress_ports.yml
- name: Open HTTP/HTTPS for ingress
  hosts: k3s_master:k3s_workers
  become: true
  vars:
    ufw_rules:
      - { port: "80", proto: "tcp" }
      - { port: "443", proto: "tcp" }
  tasks:
    - name: Ensure UFW installed
      ansible.builtin.package:
        name: ufw
        state: present

    - name: Allow SSH first (safety)
      community.general.ufw:
        rule: allow
        port: "22"
        proto: tcp

    - name: Allow HTTP/HTTPS
      community.general.ufw:
        rule: allow
        port: "{{ item.port }}"
        proto: "{{ item.proto }}"
      loop: "{{ ufw_rules }}"

    - name: Enable UFW
      community.general.ufw:
        state: enabled
        logging: "on"
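To confirm the rules landed on every node, a quick read-only ad-hoc check from the control host (assumes the inventory sketch above; expect ALLOW rows for 22, 80, and 443):

# Spot-check UFW state on all nodes
ansible k3s_master:k3s_workers -i ansible/inventory.ini -b \
  -m ansible.builtin.command -a "ufw status verbose"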
Step 2 — Install Helm on master & deploy NGINX with Traefik Ingress
We pin KUBECONFIG to /etc/rancher/k3s/k3s.yaml because the play runs with become: true and root has no kubeconfig of its own (without it, Helm tries http://127.0.0.1:8080 and fails). Optional vars let you override the image repository and tag in case the chart's default tag disappears upstream.
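For example, to run the play with a tag override at the command line (illustrative invocation, assuming the inventory sketch from Step 1):

# Override the image tag at run time instead of editing vars
ansible-playbook -i ansible/inventory.ini ansible/helm_on_master_and_nginx.yml \
  -e "extra_image_tag=latest"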
---
# ansible/helm_on_master_and_nginx.yml
- name: Install Helm on k3s-master and deploy NGINX with Traefik Ingress
  hosts: k3s_master
  become: true
  vars:
    helm_install_script: /tmp/get_helm.sh
    release_name: "web"
    namespace: "apps"
    ingress_host: "nginx.apps.lan"
    kubeconfig: "/etc/rancher/k3s/k3s.yaml"
    # Optional overrides (uncomment or pass with -e):
    # extra_image_repo: "bitnamilegacy/nginx"
    # extra_image_tag: "1.29.1-debian-12-r0"
  environment:
    KUBECONFIG: "{{ kubeconfig }}"
  tasks:
    - name: Ensure curl and tar present
      ansible.builtin.package:
        name: [curl, tar]
        state: present

    - name: Fetch Helm install script (official)
      ansible.builtin.get_url:
        url: https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
        dest: "{{ helm_install_script }}"
        mode: "0755"

    - name: Install Helm 3 (idempotent)
      ansible.builtin.command: bash {{ helm_install_script }}
      args:
        creates: /usr/local/bin/helm

    - name: Ensure namespace exists (apply)
      ansible.builtin.command: kubectl apply -f -
      args:
        stdin: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: {{ namespace }}

    - name: Add bitnami repo (idempotent) and update
      ansible.builtin.shell: |
        helm repo add bitnami https://charts.bitnami.com/bitnami 2>/dev/null || true
        helm repo update
      args:
        executable: /bin/bash

    # The {% if %} blocks must NOT be wrapped in {% raw %}, or Ansible
    # passes them to helm as literal text instead of evaluating them.
    - name: Install/upgrade NGINX with Traefik ingress
      ansible.builtin.command: >
        helm upgrade --install {{ release_name }} bitnami/nginx -n {{ namespace }}
        --create-namespace
        --set service.type=ClusterIP
        --set ingress.enabled=true
        --set ingress.ingressClassName=traefik
        --set ingress.hostname={{ ingress_host }}
        {% if extra_image_repo is defined %}--set image.repository={{ extra_image_repo }}{% endif %}
        {% if extra_image_tag is defined %}--set image.tag={{ extra_image_tag }}{% endif %}

    - name: Wait for NGINX deployment to be ready
      ansible.builtin.command: kubectl -n {{ namespace }} rollout status deploy/{{ release_name }}-nginx --timeout=180s

    - name: Show svc/ingress
      ansible.builtin.shell: kubectl -n {{ namespace }} get deploy,svc,ingress -o wide
      args:
        executable: /bin/bash
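For orientation, the chart should render an Ingress roughly like the sketch below. The name follows the Bitnami fullname convention (release + chart), and defaults such as pathType vary by chart version, so verify against the real output of helm get manifest web -n apps:

# Approximate Ingress rendered by the chart (sketch, not actual chart output)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-nginx
  namespace: apps
spec:
  ingressClassName: traefik
  rules:
    - host: nginx.apps.lan
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: web-nginx
                port:
                  name: http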
Step 3 — Make the hostname resolvable on the control host
Append an /etc/hosts entry mapping nginx.apps.lan to the master's IP.
---
# ansible/hosts_entry_local.yml
- name: Add nginx.apps.lan host mapping on control host
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ingress_ip: "192.168.56.10"
    host_entry: "nginx.apps.lan"
  tasks:
    - name: Ensure /etc/hosts entry exists
      become: true
      ansible.builtin.lineinfile:
        path: /etc/hosts
        create: true
        state: present
        regexp: '^\S+\s+{{ host_entry }}\s*$'
        line: "{{ ingress_ip }} {{ host_entry }}"
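With all three playbooks in place, a full run from the control host looks like this (only the hosts-entry play needs -K, for sudo on localhost; inventory as sketched in Step 1):

ansible-playbook -i ansible/inventory.ini ansible/ufw_ingress_ports.yml
ansible-playbook -i ansible/inventory.ini ansible/helm_on_master_and_nginx.yml
ansible-playbook ansible/hosts_entry_local.yml -K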
Validation
# From the control host
curl -I http://nginx.apps.lan/
curl -s http://nginx.apps.lan/ | head
# (Optional) On the master
ssh stackadmin@192.168.56.10 'kubectl -n apps get deploy,svc,ingress -o wide'
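If name resolution is in doubt, bypass /etc/hosts and send the Host header straight at the master's Traefik:

# Same request without relying on /etc/hosts
curl -I -H "Host: nginx.apps.lan" http://192.168.56.10/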
Troubleshooting (all the issues we hit)
- Namespace already exists: kubectl create ns apps fails with AlreadyExists. Use an idempotent apply:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Namespace
  metadata:
    name: apps
  EOF

- Helm reports "Kubernetes cluster unreachable" (http://127.0.0.1:8080): the play runs with become: true and root has no kubeconfig. Add environment: KUBECONFIG=/etc/rancher/k3s/k3s.yaml at play level (see Step 2).

- Rollout timeout & Init:ImagePullBackOff: Bitnami removed some versioned Debian tags (e.g., bitnami/nginx:1.29.1-debian-12-r0). Fix by overriding the image; the extra vars in the playbook set these:

  # Option 1 (simple): latest
  --set image.tag=latest
  # Option 2 (pinned legacy)
  --set image.repository=bitnamilegacy/nginx --set image.tag=1.29.1-debian-12-r0

- Workers NotReady & duplicate INTERNAL-IP on VMware: cloned VMs often share the same MAC or machine-id, so vmnet8 DHCP hands out the same IP to both. Fix:
  - Generate a unique MAC in the VMware settings of each VM.
  - Regenerate the machine-id on each VM:

    sudo rm -f /etc/machine-id
    sudo systemd-machine-id-setup
    sudo rm -f /var/lib/dbus/machine-id
    sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
    sudo netplan apply

  - (Optional) Pin the kubelet IP explicitly:

    # master
    echo "K3S_NODE_IP=<MASTER-IP>" | sudo tee /etc/rancher/k3s/k3s.service.env
    sudo systemctl restart k3s
    # worker
    echo "K3S_NODE_IP=<WORKER-IP>" | sudo tee /etc/rancher/k3s/k3s-agent.service.env
    sudo systemctl restart k3s-agent

- DNS/egress to registries: if pulls fail with name-resolution or TLS errors, (re)enable systemd-resolved on the nodes:

  sudo bash -c 'cat >/etc/systemd/resolved.conf <<CFG
  [Resolve]
  DNS=1.1.1.1 8.8.8.8
  FallbackDNS=9.9.9.9 1.0.0.1
  CFG'
  sudo rm -f /etc/resolv.conf
  sudo ln -s /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
  sudo systemctl enable --now systemd-resolved
  sudo systemctl restart systemd-resolved

  Then retry: kubectl -n apps rollout restart deploy/web-nginx

- Ansible on localhost asks for a sudo password while gathering facts: either run with -K, or set gather_facts: false and put become: true only on the task that writes /etc/hosts.

- Undefined variable in the hosts-entry playbook: define ingress_ip and host_entry in vars: or pass them via -e:

  ansible-playbook ansible/hosts_entry_local.yml -K \
    -e "ingress_ip=192.168.56.10 host_entry=nginx.apps.lan"

- Viewing pod Events reliably: some shells choke on sed -n "/Events:/,$p". Use:

  kubectl -n apps describe pod -l app.kubernetes.io/name=nginx | awk '/^Events:/,0'
  kubectl -n apps get events --sort-by=.lastTimestamp | tail -n 25
What’s next: add TLS (Let’s Encrypt via Traefik) or practice Helm upgrade/rollback flows.