
Goal: prepare the host (Kubuntu 25), create a reusable base VM template (Ubuntu Server 24.04), clone a k3s-master
VM, and verify connectivity with Ansible.
Host context we’ll use throughout this series: hostname fullstacklab.site, user stackadmin.
Why this matters (short theory)
Before touching Kubernetes or SIEM, you need a repeatable lab you can destroy and rebuild in minutes. Today we set up:
- a clean host with admin tools,
- a base VM template with SSH and correct networking,
- a clone workflow (Snapshot → Full Clone),
- and a first Ansible ping to prove remote automation works.
Topology at a glance
Step 0 — Prepare the host (Kubuntu 25)
Run these once on your host:
sudo hostnamectl set-hostname fullstacklab.site
sudo apt update
sudo apt install -y openssh-client ansible curl jq git make python3-pip tree net-tools docker.io
sudo usermod -aG docker stackadmin
# SSH key for user 'stackadmin' if you don't have one yet:
sudo -u stackadmin bash -lc 'test -f ~/.ssh/id_ed25519 || ssh-keygen -t ed25519 -C "stackadmin@fullstacklab.site" -N "" -f ~/.ssh/id_ed25519'
Note: Docker is for later (Minikube and CI). The important bits today are SSH and Ansible.
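A quick optional sanity check that the control-node tooling is in place (output will vary with your versions):
ansible --version | head -1
ssh-keygen -lf ~/.ssh/id_ed25519.pub   # fingerprint of the key you'll push to VMs
docker --version                       # only needed later in the series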
Step 1 — VMware networking (Host-only + NAT)
- vmnet1 (Host-only): 192.168.56.0/24, DHCP OFF
- vmnet8 (NAT): DHCP ON (Internet egress only)
Use Virtual Network Editor (as root) to confirm these settings.
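From the host you can sanity-check the result (a quick optional check; VMware normally assigns the host 192.168.56.1 on vmnet1):
ip -br addr show vmnet1    # host side of the Host-only network
ip -br addr show vmnet8    # host side of the NAT network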
Step 2 — Create the Base Template VM (Ubuntu 24.04)
1. New VM (EFI, 20 GB thin, 2 vCPU, 2 GB RAM) with two NICs:
- ens33 → vmnet1 (Host-only)
- ens34 → vmnet8 (NAT)
2. Install Ubuntu Server 24.04 (minimal) and the OpenSSH server. Create user stackadmin with sudo.
3. Configure Netplan for a static IP on vmnet1:
# /etc/netplan/01-lab.yaml
network:
  version: 2
  ethernets:
    ens33:                      # vmnet1 (Host-only)
      addresses: [192.168.56.10/24]
    ens34:                      # vmnet8 (NAT)
      dhcp4: true
Apply & verify:
sudo netplan apply
ip -br addr
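# Optional extra checks (assumes VMware gives the host 192.168.56.1 on vmnet1):
ip route                   # default route should go out via ens34 (NAT)
ping -c1 192.168.56.1      # reach the host side of the Host-only network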
4. Add your host’s public key to the VM (run these on the VM):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "<your_public_key_here>" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
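# Alternative, run from the host (assumes password auth is still enabled on the VM):
ssh-copy-id -i ~/.ssh/id_ed25519.pub stackadmin@192.168.56.10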
5. Test from the host:
ssh stackadmin@192.168.56.10 'hostnamectl; ip -br addr'
6. Shut down the VM and create a snapshot named gold.
Step 3 — Clone the k3s-master VM
Use a Full Clone from the gold snapshot and boot it. Then set the hostname:
sudo hostnamectl set-hostname k3s-master
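Optionally, give the clone a fresh identity so later clones from the same gold snapshot don’t collide (a minimal sketch; skip if you only ever run one clone):
sudo rm -f /etc/machine-id && sudo systemd-machine-id-setup   # new machine-id (fresh DHCP lease on ens34)
sudo rm -f /etc/ssh/ssh_host_* && sudo ssh-keygen -A          # regenerate SSH host keys
sudo systemctl restart ssh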
Step 4 — First Ansible “ping”
Edit your inventory (example assumes 192.168.56.10):
# ansible/inventory/hosts.ini
[k3s_master]
k3s-master ansible_host=192.168.56.10 ansible_user=stackadmin
[all:vars]
ansible_become=true
Then:
# Without sudo (quickest test)
ansible -i ansible/inventory/hosts.ini k3s_master -m ping -e 'ansible_become=false'
# Or with sudo prompt when needed:
ansible -i ansible/inventory/hosts.ini k3s_master -m ping -b -K
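Optional: a minimal ansible.cfg next to the inventory lets you drop the -i flag when you run commands from the ansible/ directory (a sketch; the path simply mirrors the inventory layout above):
# ansible/ansible.cfg
[defaults]
inventory = inventory/hosts.ini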
If you prefer passwordless (zero-prompt) sudo in the lab, run this on the VM:
echo 'stackadmin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-stackadmin-nopasswd >/dev/null
sudo chmod 0440 /etc/sudoers.d/90-stackadmin-nopasswd
sudo visudo -cf /etc/sudoers.d/90-stackadmin-nopasswd
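Quick end-to-end proof that passwordless sudo works (it should print root):
ansible -i ansible/inventory/hosts.ini k3s_master -b -m command -a whoami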
Troubleshooting (teach yourself to prove root cause)
- SSH auth fails? Check ~/.ssh/authorized_keys permissions (600), the sshd status (systemctl status ssh), and the firewall (sudo nft list ruleset).
- Wrong NIC names? Run ip -br link and adjust the interface names in Netplan.
- No route to the Internet? Make sure ens34 (vmnet8) uses DHCP.
- Ansible complains about the host key? SSH once interactively to accept the fingerprint, or pass -e 'ansible_ssh_common_args="-o StrictHostKeyChecking=no"'.
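When the root cause isn’t obvious, raise verbosity on both layers:
ssh -vvv stackadmin@192.168.56.10 true                            # shows which keys and auth methods are tried
ansible -i ansible/inventory/hosts.ini k3s_master -m ping -vvv    # shows the exact SSH command Ansible runs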
What you learned today (beginner-friendly recap)
- The host isn’t your lab; it’s your control plane (Ansible, SSH key, tooling).
- A Base Template + Snapshot saves hours—clones are cheap, reproducible, and safe.
- A successful Ansible ping proves the “remote automation path” is alive.