Logging Basics: Log Levels, logrotate, Retention, and rsyslog Forwarding (TCP/TLS)

Linux logging is one of those “boring until it saves your weekend” topics. In a SOC lab, a production environment, or even a small home server setup, good logging is the difference between guessing and knowing. This article builds on the basics (/var/log, journald, syslog) and focuses on the practical admin layer: log levels, rotation, retention, and central forwarding with rsyslog over TCP/TLS.

Why this matters (and why “just keep logs” is not enough)

Logs are only useful if they are:

  • Searchable (you can find the signal fast)
  • Reliable (timestamps and sources make sense)
  • Retained long enough for incident response/audits
  • Centralized (so one compromised host doesn’t erase the trail)

That’s exactly what we’ll build here: a clean, repeatable pattern you can reuse across servers and labs.


1) Syslog “grammar”: facility + severity (priority)

When you see syslog rules like authpriv.* or *.err, it helps to know what they mean. Syslog messages are categorized by:

Severity (how bad is it?)

Severity is a number from 0 (most severe) to 7 (least severe):

  • 0 emerg – system unusable (think: serious kernel/system failure)
  • 1 alert – immediate action required
  • 2 crit – critical conditions
  • 3 err – errors
  • 4 warning – warnings
  • 5 notice – normal but significant
  • 6 info – informational
  • 7 debug – verbose debugging

Facility (where it comes from)

Facility is the “bucket” for the source or subsystem:

  • auth, authpriv – authentication events (SSH, sudo, PAM)
  • kern – kernel
  • daemon – system daemons
  • mail – mail services
  • user – user-space messages
  • local0–local7 – custom facilities (great for your own apps/services)

Priority is the facility/severity pair. Rules in rsyslog match these and decide where messages go (file, remote server, drop, etc.).
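On the wire, the priority is encoded as a single number: PRI = facility × 8 + severity. You can sanity-check the arithmetic in any shell (the facility numbers below, authpriv = 10 and local0 = 16, come from the syslog standard):

```shell
# PRI = facility * 8 + severity (per RFC 5424)
pri() { echo $(( $1 * 8 + $2 )); }

pri 10 3   # authpriv.err  -> 83
pri 16 6   # local0.info   -> 134
pri 0 0    # kern.emerg    -> 0
```

This is why a captured syslog packet starting with <83> tells you immediately it was authpriv.err.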


2) Test your pipeline instantly with logger

Don’t wait for “real” errors just to confirm logging works. Use logger to generate messages on demand:

# Default facility/severity (usually user.notice)
logger "TEST default facility/severity"

# Authentication facility example
logger -p authpriv.notice "TEST authpriv.notice from $(hostname)"

# Custom facility you may use for apps
logger -p local0.err "TEST local0.err from $(hostname)"

Then verify where it landed:

# Journald view (systemd systems)
journalctl -n 50

# Classic syslog files (varies by distro)
sudo tail -n 50 /var/log/syslog    # Debian/Ubuntu
sudo tail -n 50 /var/log/messages  # RHEL-like

Pro tip: If you’re in a lab, always keep one terminal following logs:

journalctl -f

3) logrotate: controlling disk usage without losing evidence

Logs grow forever unless you manage them. That’s what logrotate does: rotate old logs, compress them, and keep a defined number of copies.

Where logrotate is configured

  • Main config: /etc/logrotate.conf
  • Per-service configs: /etc/logrotate.d/*

Key directives you should actually care about

  • daily / weekly – time-based rotation
  • size 100M – rotate when a log hits a size threshold
  • rotate 14 – how many rotated logs to keep
  • compress + delaycompress – save space without breaking tools that read the most recent rotated file
  • missingok, notifempty – don’t fail if the log is missing or empty
  • postrotate – reload the service so it reopens log files

Practical advice: In production, size-based rotation is often safer than only daily rotation. During incidents, logs can explode in minutes.
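Putting those directives together, a per-service drop-in might look like this (the myapp service and log path are placeholders, not a real package). Note the use of maxsize rather than size: plain size rotates only on the size threshold and ignores the time interval, while maxsize rotates on whichever trigger fires first.

```
# /etc/logrotate.d/myapp  (illustrative example)
/var/log/myapp/*.log {
    daily
    maxsize 100M
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        systemctl reload myapp >/dev/null 2>&1 || true
    endscript
}
```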

Dry-run and force-run (safe testing)

# Dry-run (shows what would happen)
sudo logrotate -d /etc/logrotate.conf

# Force rotation (use carefully)
sudo logrotate -f /etc/logrotate.conf

4) Retention: a simple model that scales from lab to SOC

Retention should match your goals (troubleshooting, incident response, compliance). A clean approach:

  • Hot (local, fast): 7–14 days on each server
  • Warm (central, searchable): 30–90 days on a log server/SIEM
  • Cold (archive): 6–12 months compressed on cheap storage
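The cold tier can be as simple as a cron job that sweeps compressed, rotated logs past the warm window into archive storage. A minimal sketch, assuming GNU find/mv and placeholder paths (adjust the 90-day cutoff to your policy):

```shell
# Move compressed logs older than N days from one tree to another.
# archive_old_logs SRC_DIR DEST_DIR
archive_old_logs() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    # -mtime +90: last modified more than 90 days ago
    find "$src" -type f -name '*.gz' -mtime +90 -exec mv -t "$dst" {} +
}

# Example cron usage (paths are assumptions):
# archive_old_logs /var/log/remote /srv/log-archive
```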

And one rule that’s non-negotiable: time synchronization. If servers disagree about time, your timeline becomes fiction.

# Check time sync status (systemd)
timedatectl status

# Chrony example (the unit is chrony on Debian/Ubuntu, chronyd on RHEL-like)
systemctl status chronyd --no-pager

5) Central forwarding with rsyslog (TCP): the “good default”

Central forwarding solves two problems:

  • You can search across all systems from one place
  • A compromised host can’t easily erase its tracks (because logs already left the box)

5.1 Log server configuration (receiver)

Create a receiver config file on the log server:

# /etc/rsyslog.d/10-server.conf

module(load="imtcp")
input(type="imtcp" port="514")

# Store per-host, per-program
template(name="PerHostPerProgram" type="string"
  string="/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log")

*.* action(type="omfile" dynaFile="PerHostPerProgram")

Then prepare directories and restart rsyslog:

sudo mkdir -p /var/log/remote
sudo systemctl restart rsyslog

# Confirm listener
sudo ss -lntp | grep ':514'

5.2 Client configuration (sender)

On each client, add a forward rule:

# /etc/rsyslog.d/90-forward.conf

*.* action(type="omfwd" target="LOGSERVER_IP" port="514" protocol="tcp")

Restart and test:

sudo systemctl restart rsyslog
logger -t fwdtest -p local0.info "hello from $(hostname) forwarding test"

On the server, search for newly created files:

sudo find /var/log/remote -type f -mmin -5 | tail

# The template is per-program, so the logger -t tag becomes the filename
sudo tail -n 50 /var/log/remote/<hostname>/fwdtest.log 2>/dev/null

6) Upgrading to TLS: when “TCP works” is not enough

Plain TCP syslog is easy, but it’s also plaintext. For real environments (or a serious lab), TLS gives you:

  • Confidentiality (logs aren’t readable on the wire)
  • Integrity (harder to tamper in transit)
  • Authentication (clients verify the server; optionally mTLS where the server verifies clients)

Common pattern:

  • Use a small internal CA
  • Issue a cert for the rsyslog server
  • Optionally issue client certs (mTLS) for stronger trust
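As a preview of that pattern, the client-side shape of a gtls sender looks roughly like this. This is a sketch, not a complete setup: the hostname, cert path, and port 6514 are assumptions, and the parameter names assume a reasonably recent rsyslog 8.x.

```
# /etc/rsyslog.d/90-forward-tls.conf  (sketch only; server side and
# certificate generation are not shown here)
global(DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem")

action(type="omfwd" target="logs.example.internal" port="6514" protocol="tcp"
       StreamDriver="gtls"
       StreamDriverMode="1"
       StreamDriverAuthMode="x509/name"
       StreamDriverPermittedPeers="logs.example.internal")
```

StreamDriverMode="1" means TLS-only, and x509/name makes the client verify the server cert's name against PermittedPeers.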

Note: TLS setup is a slightly bigger topic (cert generation, peer validation, permissions). In the next article/lesson, you can implement a full mTLS config with gtls and strict PermittedPeer rules.


7) Filtering noise: keep the signal, control costs

A SOC-ready logging setup isn’t about collecting everything forever. It’s about collecting the right things reliably.

Examples of practical rsyslog filtering

Send only warning and above to central logging (reduces traffic and storage):

*.warning action(type="omfwd" target="LOGSERVER_IP" port="514" protocol="tcp")

Drop known “healthcheck noise” (use carefully):

:msg, contains, "healthcheck" stop

Separate auth logs (if your distro uses classic syslog files):

auth,authpriv.*    /var/log/auth.log

Rule of thumb: start by collecting more, then reduce noise once you understand patterns. Premature filtering can hide early compromise signals.


8) A quick “SOC-ready” checklist

  • Time sync is working (chrony/ntp) across all hosts
  • journald persistence is configured if you need logs across reboots
  • logrotate is rotating + compressing correctly
  • rsyslog forwarding is enabled (TCP for lab, TLS for real)
  • Log server stores logs in a predictable structure (per-host/per-program)
  • You can generate test events with logger and see them end-to-end
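For the journald-persistence item on that list, the change is a one-liner in journald's config (the size cap is an arbitrary example, not a recommendation):

```
# /etc/systemd/journald.conf (excerpt)
[Journal]
Storage=persistent
SystemMaxUse=1G
```

Then restart the journal daemon: sudo systemctl restart systemd-journald.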

Conclusion

With facility/severity understanding, a predictable rotation policy, a sane retention plan, and rsyslog forwarding, you’ve turned logging into a system. That’s the foundation for everything that comes next: correlation, alerting, dashboards, and incident response workflows.

If your next step is a SIEM stack (Wazuh/ELK/OpenSearch), this exact setup will pay off immediately: your logs will be clean, timestamped, and centrally available—ready for parsing and detection rules.


FAQ

Should I forward everything or only warnings/errors?

In a lab, forward everything first so you learn patterns. In production, you can reduce noise later—just don’t filter too aggressively at the start.

Is journald enough without rsyslog?

Journald is great locally and for systemd services. But for centralized logging and remote forwarding, rsyslog is still a standard workhorse.

When should I use TLS for syslog?

Any time logs traverse a network you don’t fully trust (which is basically always). TLS (ideally mTLS) is the professional approach.
