HomeLab – ConfigMap Index, Helm Upgrade & Rollback – Day 9

Goal: Serve a custom index.html via ConfigMap, mount it into Bitnami nginx, validate over HTTPS through Traefik, then practice Helm upgrade and rollback. Control host: fullstacklab.site (user stackadmin); kubectl and helm run on k3s-master.

ConfigMap → NGINX mount → Traefik TLS → Helm upgrade/rollback

Step 1 — Ensure host mapping for nginx.apps.lan

We map the hostname to our master node so local requests hit Traefik correctly.

---
# ansible/day9_configmap_index_and_rollback.yml (Play 0)
- name: Ensure host mapping for nginx.apps.lan on control host
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    master_ip: "172.16.9.131"         # adjust if needed
    ingress_host: "nginx.apps.lan"
  tasks:
    - name: Map {{ ingress_host }} -> {{ master_ip }} in /etc/hosts
      become: true
      ansible.builtin.lineinfile:
        path: /etc/hosts
        create: true
        state: present
        regexp: '^\S+\s+{{ ingress_host | regex_escape() }}\s*$'
        line: "{{ master_ip }} {{ ingress_host }}"
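Outside Ansible, the same idempotent mapping can be sketched in plain shell. The demo below runs against a scratch file so nothing real is touched; for the real thing, set HOSTS_FILE=/etc/hosts and run with sudo:

```shell
# Idempotent hosts-file mapping, equivalent to the lineinfile task above.
# Demo uses a scratch file; point HOSTS_FILE at /etc/hosts (with sudo) for real use.
HOSTS_FILE="$(mktemp)"
MASTER_IP="172.16.9.131"
INGRESS_HOST="nginx.apps.lan"

echo "10.0.0.1 nginx.apps.lan" > "$HOSTS_FILE"   # simulate a stale entry

if grep -qE "^[^ ]+[[:space:]]+${INGRESS_HOST}[[:space:]]*$" "$HOSTS_FILE"; then
  # Host already mapped: rewrite the line so the IP is current
  sed -i -E "s|^[^ ]+([[:space:]]+${INGRESS_HOST})[[:space:]]*$|${MASTER_IP}\1|" "$HOSTS_FILE"
else
  echo "${MASTER_IP} ${INGRESS_HOST}" >> "$HOSTS_FILE"
fi

grep "${INGRESS_HOST}" "$HOSTS_FILE"   # → 172.16.9.131 nginx.apps.lan
```

Like the lineinfile task, re-running this leaves a single up-to-date entry rather than appending duplicates.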

Step 2 — Create ConfigMap and mount it into Bitnami NGINX

We template a friendly homepage and mount it as /opt/bitnami/nginx/html/index.html using extraVolumes and extraVolumeMounts. We also set image.tag=latest to sidestep legacy-tag pull issues, and keep TLS and our Traefik redirect middleware enabled.

---
# ansible/day9_configmap_index_and_rollback.yml (Play 1)
- name: Deploy custom index via ConfigMap and mount into Bitnami NGINX
  hosts: k3s_master
  become: true
  vars:
    kubeconfig: "/etc/rancher/k3s/k3s.yaml"
    namespace: "apps"
    release_name: "web"
    ingress_host: "nginx.apps.lan"
    tls_secret: "nginx-tls"
    image_tag: "latest"
    replicas: 2

    cm_name: "web-index"
    index_version: "v1"     # change to v2 for the second upgrade
    index_title: "K3s Demo — {{ index_version }}"
    index_message: "Hello from Day 9 ({{ index_version }})"
    values_file: "/tmp/web-values-day9.yaml"
  environment:
    KUBECONFIG: "{{ kubeconfig }}"

  tasks:
    - name: Ensure namespace exists
      ansible.builtin.command: kubectl apply -f -
      args:
        stdin: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: {{ namespace }}

    - name: Create/Update ConfigMap with custom index.html ({{ index_version }})
      ansible.builtin.command: kubectl -n {{ namespace }} apply -f -
      args:
        stdin: |
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: {{ cm_name }}
          data:
            index.html: |
              <!doctype html>
              <html lang="en">
              <head>
                <meta charset="utf-8">
                <meta name="viewport" content="width=device-width, initial-scale=1">
                <title>{{ index_title }}</title>
                <style>
                  body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Ubuntu, Cantarell, "Helvetica Neue", Arial, "Noto Sans", sans-serif;
                         background: #0e1116; color: #e6edf3; margin: 0; display: grid; min-height: 100vh; place-items: center; }
                  .card { max-width: 900px; padding: 40px; background: #111827; border-radius: 16px; box-shadow: 0 10px 30px rgba(0,0,0,.4); }
                  h1 { margin: 0 0 12px; font-size: 2rem; }
                  p { margin: 8px 0; line-height: 1.5; color: #cbd5e1; }
                  code { background: #0b1220; padding: 2px 6px; border-radius: 6px; }
                  .badge { display: inline-block; padding: 4px 10px; border-radius: 999px; background: #0b4; color: #041; font-weight: 700; }
                </style>
              </head>
              <body>
                <div class="card">
                  <h1>{{ index_title }}</h1>
                  <p>{{ index_message }}</p>
                  <p>Served via <code>ConfigMap</code> → mounted into <code>/opt/bitnami/nginx/html/index.html</code>.</p>
                  <p>Ingress host: <code>{{ ingress_host }}</code> • Namespace: <code>{{ namespace }}</code></p>
                  <p class="badge">Version: {{ index_version }}</p>
                </div>
              </body>
              </html>

    - name: Render Helm values (mount ConfigMap as index.html)
      ansible.builtin.copy:
        dest: "{{ values_file }}"
        mode: "0644"
        content: |
          replicaCount: {{ replicas }}
          image:
            tag: {{ image_tag }}
          service:
            type: ClusterIP
          ingress:
            enabled: true
            ingressClassName: traefik
            hostname: {{ ingress_host }}
            tls: true
            extraTls:
              - hosts: [{{ ingress_host }}]
                secretName: {{ tls_secret }}
            annotations:
              "traefik.ingress.kubernetes.io/router.middlewares": "{{ namespace }}-https-redirect@kubernetescrd"

          # Mount ConfigMap to replace default index.html
          extraVolumes:
            - name: web-index
              configMap:
                name: {{ cm_name }}
          extraVolumeMounts:
            - name: web-index
              mountPath: /opt/bitnami/nginx/html/index.html
              subPath: index.html
              readOnly: true

    - name: Ensure Bitnami repo exists
      ansible.builtin.command: helm repo add bitnami https://charts.bitnami.com/bitnami
      register: add_repo
      failed_when: add_repo.rc not in [0,1]
      changed_when: add_repo.rc == 0

    - name: Helm repo update
      ansible.builtin.command: helm repo update
      changed_when: false

    - name: Helm upgrade --install with values (pin image.tag=latest)
      ansible.builtin.command: >
        helm upgrade --install {{ release_name }} bitnami/nginx -n {{ namespace }}
        -f {{ values_file }}

    - name: Wait for deployment to become Available
      ansible.builtin.command: kubectl -n {{ namespace }} rollout status deploy/{{ release_name }}-nginx --timeout=300s

    - name: Show deploy/svc/ing
      ansible.builtin.command: kubectl -n {{ namespace }} get deploy,svc,ingress -o wide
      changed_when: false
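One caveat worth flagging: a ConfigMap mounted via subPath is never refreshed inside running pods, so editing index.html alone will not change what NGINX serves until the pods restart. A common pattern is to stamp a checksum of the content into the pod template so any change forces a rollout. A sketch of what could be added to the values file (cm_checksum is a hypothetical variable you would compute in the play, e.g. a sha256 of the rendered index.html; podAnnotations is a standard value in Bitnami charts):

```yaml
# Hypothetical addition to /tmp/web-values-day9.yaml — the annotation value
# changes whenever index.html changes, so the next helm upgrade rolls the pods.
podAnnotations:
  checksum/web-index: "{{ cm_checksum }}"   # assumed var: sha256 of index.html
```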

Step 3 — Validate over HTTPS

We verify from the control host that HTTPS is trusted (mkcert CA installed on Day 8) and that the homepage shows the version badge.

---
# ansible/https_validate_from_host.yml
- name: Validate HTTPS from control host
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    url: "https://nginx.apps.lan/"
  tasks:
    - name: HEAD request
      ansible.builtin.command: curl -I {{ url }}
      register: head
      changed_when: false
    - name: Show headers
      ansible.builtin.debug:
        var: head.stdout

    - name: Body snippet (first 12 lines)
      ansible.builtin.shell: curl -s {{ url }} | sed -n '1,12p'
      register: body
      changed_when: false
    - name: Show body
      ansible.builtin.debug:
        var: body.stdout
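Beyond headers, a quick functional check is to grep the body for the version badge. The sketch below runs against an inline sample so it is self-contained; against the live site you would feed it `curl -s https://nginx.apps.lan/` instead:

```shell
# Extract the "Version: vN" badge from the served HTML.
# Self-contained demo: PAGE stands in for `curl -s https://nginx.apps.lan/`.
PAGE='<p class="badge">Version: v1</p>'

VERSION=$(printf '%s\n' "$PAGE" | grep -oE 'Version: v[0-9]+' | awk '{print $2}')
echo "$VERSION"   # → v1
```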

Step 4 — Upgrade to v2 and rollback

We switch index_version to v2, re-run the playbook (Helm upgrade), and then practice a rollback to a chosen revision.

# Upgrade content to v2
ansible-playbook -i ansible/inventory/hosts.ini ansible/day9_configmap_index_and_rollback.yml \
  -e index_version=v2 -e index_title="K3s Demo — v2" -e index_message="Hello from Day 9 (v2)" -K

# Show Helm history and pick a revision
ssh stackadmin@fullstacklab.site 'ssh k3s-master "helm -n apps history web"'

# Roll back to a specific revision (example: 3)
ansible-playbook -i ansible/inventory/hosts.ini ansible/day9_configmap_index_and_rollback.yml \
  -e do_rollback=true -e target_revision=3 -K
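The do_rollback/target_revision flags above assume a conditional play in the same playbook along these lines (a sketch, not shown in Play 1; adjust names to taste):

```yaml
# ansible/day9_configmap_index_and_rollback.yml (Play 2, sketch)
- name: Optionally roll back the Helm release
  hosts: k3s_master
  become: true
  vars:
    kubeconfig: "/etc/rancher/k3s/k3s.yaml"
    namespace: "apps"
    release_name: "web"
    do_rollback: false
    target_revision: ""
  environment:
    KUBECONFIG: "{{ kubeconfig }}"
  tasks:
    - name: Helm rollback to revision {{ target_revision }}
      ansible.builtin.command: >
        helm rollback {{ release_name }} {{ target_revision }}
        -n {{ namespace }} --wait --timeout 5m
      when: (do_rollback | bool) and (target_revision | string | length > 0)

    - name: Wait for deployment after rollback
      ansible.builtin.command: >
        kubectl -n {{ namespace }} rollout status deploy/{{ release_name }}-nginx --timeout=300s
      when: do_rollback | bool
```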

Troubleshooting (what we fixed today)

  • Rollout timeout after rollback: “1 old replicas are pending termination…” – a stuck pod/ReplicaSet. Fix: force-delete pods (--force --grace-period=0), optionally scale deploy/web-nginx to 0 and back to the desired replicas, then wait again.
  • Init:ImagePullBackOff (legacy Bitnami tag): A rollback re-created a ReplicaSet pinned to 1.29.1-debian-12-r0, a tag that no longer pulls. Fix: pin image.tag=latest (or use image.repository=bitnamilegacy/nginx with a specific tag) and run helm upgrade --reuse-values.
  • helm: repo bitnami not found: Add the repo on k3s-master:
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    Then retry upgrade.
  • jq not found during diagnostics: Either install jq on the master for JSON helpers, or use plain kubectl without JSON processing.
  • HTTPS trust: If curl complains about an untrusted or self-signed certificate, ensure the mkcert root CA is installed on the control host (Day 8) and that the Ingress references the correct TLS Secret (nginx-tls).
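For the Init:ImagePullBackOff case above, the values-file fix can be sketched as follows (the tag shown is the one from today's failure; confirm the tag actually exists in the legacy repository before pinning):

```yaml
# Sketch: pin the legacy image explicitly instead of image.tag=latest
image:
  repository: bitnamilegacy/nginx
  tag: 1.29.1-debian-12-r0   # assumed to exist in the legacy repo; verify first
```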

What’s next (Day 10): bootstrap Argo CD (GitOps). We’ll install Argo, expose it via Traefik TLS (argocd.apps.lan), and create our first Application wired to a Git repo so changes sync automatically.
