If you want a reliable Linux server for production or lab use, you should start with a solid foundation: correct firmware mode (UEFI), a clean partition layout, mirrored disks (RAID 1) and a secure basic configuration. In this guide we’ll walk through the full process step by step, using common Linux tools and concepts that work across most modern distributions (Debian, Ubuntu, Rocky, Alma, etc.).
1. Before You Start: Plan the Server
A good installation starts before you boot from ISO. Take a moment to think about:
- Use case: Will this be a web server, database server, file server, or “general purpose” box?
- Disks: Do you have 2 identical disks for RAID 1 (mirror)? Are they SSDs or HDDs?
- Firmware: Does the server use UEFI or legacy BIOS? (New hardware is almost always UEFI.)
- Network: Static IP or DHCP? One or more network interfaces?
In this article we’ll assume:
- Two identical disks: /dev/sda and /dev/sdb
- UEFI firmware
- Linux distribution with a standard installer (e.g. Debian/Ubuntu server)
- A simple and robust setup using software RAID 1 with mdadm
2. UEFI vs Legacy BIOS: Why It Matters
Modern servers usually boot using UEFI instead of legacy BIOS. The installation steps are similar, but UEFI has a few important differences:
- Disks are usually partitioned with GPT (GUID Partition Table), not MBR.
- You need a small EFI System Partition (ESP), typically 512–1024 MB, formatted as FAT32.
- The bootloader (like GRUB) stores its files in the ESP, under a directory such as /EFI/debian or /EFI/ubuntu.
Before installing, enter the server’s firmware setup (often via Del, F2, F10, or F12) and:
- Confirm UEFI mode is enabled. If there is a “Legacy/CSM” option, disable it or set the boot mode to “UEFI only”.
- Secure Boot: for many server distributions, you can keep it enabled, but if you run into bootloader issues during installation, temporarily disable Secure Boot.
- Make sure your install media (USB/ISO) is listed under UEFI boot options.
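Once booted from the install media, you can also verify the mode from a shell: the kernel exposes /sys/firmware/efi only when it was started via UEFI. A minimal check:

```shell
# Report which firmware mode the running (live) system booted in.
# /sys/firmware/efi only exists when the kernel was started via UEFI.
if [ -d /sys/firmware/efi ]; then
  echo "UEFI"
else
  echo "BIOS"
fi
```

If this prints BIOS even though you enabled UEFI in firmware setup, your install media was probably booted through the legacy/CSM entry; pick the UEFI entry from the boot menu instead.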
3. Define a Practical Partitioning Scheme
With two disks in RAID 1, you want a layout that balances simplicity, reliability, and ease of troubleshooting. A common and proven pattern:
- EFI System Partition (ESP) on each disk (or on RAID 1 if your distro supports it cleanly)
- RAID 1 for the main Linux filesystem(s)
Here is a simple example for each disk (/dev/sda and /dev/sdb):
- Partition 1: EFI System Partition — 512 MB, FAT32
- Partition 2: Linux RAID member — remaining space
On top of the RAID device, you can choose:
- Single root filesystem: one big / (plus a swap file). Simple and good for many use cases.
- Separate partitions (e.g. /, /var, /home), useful for servers with heavy logs or databases.
For a first server, a good trade-off is:
- / (root) on RAID 1
- Swap file inside /
We will create:
- /dev/sda1 – EFI System Partition
- /dev/sda2 – RAID member
- /dev/sdb1 – EFI System Partition
- /dev/sdb2 – RAID member
- /dev/md0 – RAID 1 array built from /dev/sda2 + /dev/sdb2
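One caveat with these names: they follow SATA/virtio naming (/dev/sda2), while NVMe disks insert a p before the partition number (/dev/nvme0n1p2). If you script your setup, a tiny helper keeps this straight (part_dev is a name introduced here purely for illustration):

```shell
# part_dev DISK NUM -> partition device node, handling the NVMe naming quirk
# (/dev/sda + 2 -> /dev/sda2, but /dev/nvme0n1 + 2 -> /dev/nvme0n1p2).
part_dev() {
  case "$1" in
    *[0-9]) echo "${1}p${2}" ;;  # disk name ends in a digit -> needs 'p'
    *)      echo "${1}${2}" ;;
  esac
}

part_dev /dev/sda 2        # -> /dev/sda2
part_dev /dev/nvme0n1 2    # -> /dev/nvme0n1p2
```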
4. Creating Partitions in the Installer (or Manually)
Most server installers (Debian, Ubuntu, Rocky, Alma, etc.) allow you to choose:
- Manual partitioning
- Setting partition type as “physical volume for RAID”
- Creating software RAID volumes
The exact UI differs, but the logic is the same. If you prefer manual CLI partitioning (for learning or advanced setups), you can use parted or gdisk. Here’s a conceptual example using parted (run from a live environment or rescue shell):
parted /dev/sda
(parted) mklabel gpt
(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 esp on
(parted) mkpart primary 513MiB 100%
(parted) quit

parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 esp on
(parted) mkpart primary 513MiB 100%
(parted) quit
After this you will have:
- /dev/sda1, /dev/sda2
- /dev/sdb1, /dev/sdb2
5. Building RAID 1 with mdadm
If your installer has a “Create MD device” or “software RAID” option, you can build RAID directly in the installer. If not, or if you want to understand the manual process, you can create the RAID array with mdadm.
Create a RAID 1 array from /dev/sda2 and /dev/sdb2:
mdadm --create /dev/md0 \
  --level=1 \
  --raid-devices=2 \
  /dev/sda2 /dev/sdb2
Check the status:
cat /proc/mdstat
You should see something like:
md0 : active raid1 sda2[0] sdb2[1]
      <size> blocks super 1.2 [2/2] [UU]
Next, create a filesystem on the RAID array, for example ext4:
mkfs.ext4 /dev/md0
During an installation using a full-screen installer, you typically won’t enter these commands manually, but they reflect what the installer does behind the scenes.
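One step the installer also takes care of on Debian/Ubuntu is recording the array in /etc/mdadm/mdadm.conf, so the mirror is assembled reliably from the initramfs at boot. If you created the array by hand, a guarded sketch (it only acts when /dev/md0 actually exists; paths are Debian/Ubuntu defaults):

```shell
# Persist the array definition so it is assembled at boot (Debian/Ubuntu paths).
if [ -e /dev/md0 ]; then
  mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
  sudo update-initramfs -u
else
  echo "no /dev/md0 found; run this on the server after creating the array"
fi
```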
6. Mount Points and Filesystems
Now we decide where to mount each piece:
- ESP → /boot/efi (non-RAID; typically you use one of the ESPs and sync them later)
- RAID 1 array (/dev/md0) → /
If you want separate partitions (for example, /var), you can create additional logical volumes via LVM on top of /dev/md0, but for a basic server you can start with a single root filesystem.
In the installer, you would assign:
- /dev/md0 → mount point /, filesystem ext4
- /dev/sda1 → mount point /boot/efi, filesystem vfat
On first boot, your /etc/fstab will look roughly like:
/dev/md0   /          ext4  defaults    0 1
/dev/sda1  /boot/efi  vfat  umask=0077  0 1
Many distributions now use UUIDs instead of device names. That is recommended because it’s more stable. You can get UUIDs with:
blkid
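As a sketch, here is how a script could assemble a UUID-based fstab line for the root filesystem (the placeholder fallback is purely for illustration, for when /dev/md0 is not visible, e.g. if you try the script on another machine):

```shell
# Build a UUID-based fstab line for the root filesystem on /dev/md0.
# If blkid cannot see the device, fall back to a placeholder (illustration only).
dev=/dev/md0
uuid=$(blkid -s UUID -o value "$dev" 2>/dev/null)
uuid=${uuid:-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}
echo "UUID=$uuid / ext4 defaults 0 1"
```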
7. Installing the Bootloader in UEFI Mode
In UEFI mode, your installer will usually:
- Install GRUB or another bootloader.
- Create a directory in the ESP (e.g. /boot/efi/EFI/debian).
- Register a boot entry in the UEFI firmware (using efibootmgr behind the scenes).
If you ever need to reinstall the bootloader from a chroot, it will look somewhat like:
mount /dev/md0 /mnt
mount /dev/sda1 /mnt/boot/efi
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# inside chroot
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=linux
update-grub
The exact commands may differ slightly between distributions, but the concept is the same: bootloader files live in the ESP, and configuration lives in /boot/grub on the RAID filesystem.
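If you want to confirm that the firmware really has a boot entry, efibootmgr can list them. A guarded sketch (it degrades to a message when run outside a UEFI system or without efibootmgr installed):

```shell
# List the UEFI boot entries registered in firmware.
# Guarded: only works when booted via UEFI and efibootmgr is installed.
if [ -d /sys/firmware/efi ] && command -v efibootmgr >/dev/null 2>&1; then
  efibootmgr -v
else
  echo "not a UEFI boot (or efibootmgr missing); run this on the server"
fi
```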
8. First Boot: Basic System Setup
After the installation completes and the system boots from the RAID 1 array, it’s time for the basic server setup that turns a “fresh OS” into a usable, secure machine.
8.1 Update the system
Always start with package updates:
sudo apt update && sudo apt upgrade -y
(or the equivalent for your distribution, such as dnf update on Rocky/AlmaLinux).
8.2 Set hostname and /etc/hosts
Choose a clear hostname, e.g. web01 or db01. On Debian/Ubuntu you can use:
sudo hostnamectl set-hostname web01
Then verify /etc/hosts contains entries like:
127.0.0.1 localhost
127.0.1.1 web01
8.3 Configure networking (static IP)
Servers usually use a static IP. On modern Debian/Ubuntu, networking is often managed by netplan. Example configuration file /etc/netplan/01-netcfg.yaml:
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      addresses:
        - 192.168.10.10/24
      routes:
        - to: default
          via: 192.168.10.1
      nameservers:
        addresses:
          - 192.168.10.1
          - 1.1.1.1
(Older guides use gateway4: 192.168.10.1 instead of the routes block; it still works on many versions but is deprecated.)
Apply the configuration (you can run sudo netplan try first; it rolls back automatically if the new settings lock you out):
sudo netplan apply
8.4 Create a non-root admin user
Running everything as root is dangerous. Create an unprivileged user and grant sudo:
adduser adminuser
usermod -aG sudo adminuser
Log in as this user for everyday administration:
ssh adminuser@your-server
9. Secure Remote Access and Firewall
9.1 SSH hardening
Edit the SSH daemon configuration, usually /etc/ssh/sshd_config:
sudo nano /etc/ssh/sshd_config
Recommended changes:
- Disable root login: PermitRootLogin no
- Disable password logins once you have SSH keys configured: PasswordAuthentication no
- Ensure other basics are set, e.g. X11Forwarding no. (The old Protocol 2 directive is obsolete; modern OpenSSH only speaks protocol 2.)
After editing, restart SSH:
sudo systemctl restart ssh
(On RHEL-family distributions the service is named sshd.)
9.2 Firewall basics
On Debian/Ubuntu, the simple and effective choice is ufw:
sudo apt install ufw -y

# Allow SSH
sudo ufw allow OpenSSH

# Enable firewall
sudo ufw enable

# Check rules
sudo ufw status verbose
If you later add a web server:
sudo ufw allow "Nginx Full"
(or the appropriate profile for Apache or other services).
10. RAID Monitoring and Health Checks
RAID 1 protects you against a single disk failure, but only if you notice when a disk dies and replace it in time.
10.1 Install mdadm tools
On Debian/Ubuntu:
sudo apt install mdadm -y
Confirm status:
cat /proc/mdstat
sudo mdadm --detail /dev/md0
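A healthy two-disk mirror shows [UU] in /proc/mdstat; an underscore ([U_]) marks a failed or missing member. That makes a trivial script check possible. The sample text below is embedded for illustration; on a real server you would feed in the output of cat /proc/mdstat:

```shell
# Check mdstat content for a degraded array: a healthy two-disk mirror
# shows [UU]; an underscore marks a missing/failed member.
check_mirror() {
  if echo "$1" | grep -q '\[UU\]'; then
    echo "healthy"
  else
    echo "DEGRADED"
  fi
}

# Sample /proc/mdstat content (illustrative); on a real server use:
#   check_mirror "$(cat /proc/mdstat)"
sample='md0 : active raid1 sda2[0] sdb2[1]
      10476544 blocks super 1.2 [2/2] [UU]'
check_mirror "$sample"    # -> healthy
```

A line like this in a cron job, combined with the mail alerts below, gives you two independent ways to notice a dead disk.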
10.2 Configure email alerts (optional but recommended)
During mdadm package configuration, you can set an email address to receive alerts. Alternatively, edit /etc/mdadm/mdadm.conf and ensure a line like:
MAILADDR admin@example.com
Then rebuild the initramfs so the monitoring configuration is picked up early at boot:
sudo update-initramfs -u
11. Basic Quality-of-Life Tweaks
At this point, the server is installed, booting via UEFI, running on RAID 1, and reachable over the network. A few extra tweaks can save you time later:
11.1 Enable automatic security updates
On Debian/Ubuntu:
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure unattended-upgrades
This enables automatic installation of security patches. You can tune the behavior in:
/etc/apt/apt.conf.d/50unattended-upgrades
11.2 Install basic tools
A minimal server image often lacks some everyday utilities. Consider:
sudo apt install htop vim git curl wget tmux lsof -y
Adjust this list to your preferences and your organization’s standards.
11.3 Time synchronization
Correct time is very important, especially for logs and security tools. Make sure NTP is working:
timedatectl status
If NTP is not active, enable it:
sudo timedatectl set-ntp true
12. Summary: A Solid Foundation for Your Linux Server
By following these steps, you’ve built a Linux server with:
- UEFI boot and GPT partitioning
- RAID 1 mirroring for disk redundancy using
mdadm - A clean and maintainable partition layout
- Basic networking, SSH hardening and a firewall
- Initial monitoring and maintenance for RAID and system updates
From here, you can install your application stack: web server (Nginx/Apache), database server (PostgreSQL/MariaDB), container runtime (Docker), monitoring (Prometheus, Grafana), or any other services your infrastructure needs. Because you invested time in a clean installation with UEFI, RAID 1 and a secure base configuration, your future troubleshooting and maintenance will be much easier.