Useful journald tweaks

Flush old logs in journalctl

By date or by size:

    sudo journalctl --vacuum-time=2d
    sudo journalctl --vacuum-size=500M

Tail journalctl

    journalctl -f

For a specific service:

    journalctl -u httpd -f

Store logs on disk

(from http://unix.stackexchange.com/questions/159221/how-display-log-messages-from-previous-boots-under-centos-7)

On CentOS 7, you have to enable the persistent storage of log messages:

    # mkdir /var/log/journal
    # systemd-tmpfiles --create --prefix /var/log/journal
    # systemctl restart systemd-journald

Otherwise, the journal log messages are not retained between boots. This is the default on Fedora 19+.
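On recent systemd versions, an alternative is to request persistent storage explicitly in /etc/systemd/journald.conf; with Storage=persistent, journald creates /var/log/journal itself if it is missing:

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
```

Restart systemd-journald afterwards for the setting to take effect.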

Writing service files in systemd


Systemd may not be the most popular init system out there, and it still manages to annoy me every now and then, but I had a chance to play with it at work recently. A deeper dive into it has left me a little happier with how it handles things.

My requirement was to daemonize a service and have it run at startup. The service takes a few command line arguments and also needs some environment variables set. The arguments could change, but changing them should not require anyone to edit the init script itself. Systemd supported all of these requirements out of the box.

Here is what the service file looked like:

    [Unit]
    Description=myservice

    [Service]
    Environment="LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH"
    EnvironmentFile=/etc/<somefile.cfg>
    ExecStart=/usr/bin/myservice ${arg1} ${arg2} ${arg3}

    [Install]
    WantedBy=multi-user.target

The EnvironmentFile contains name=value pairs for the command line arguments:

    arg1=foo
    arg2=bar
    arg3=baz
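Systemd parses this file itself rather than passing it through a shell, but as a rough approximation you can preview how the arguments will expand by sourcing a copy of it (the /tmp path here is just for illustration):

```shell
#!/bin/sh
# Hypothetical stand-in for the EnvironmentFile referenced by the unit.
cat > /tmp/args.cfg <<'EOF'
arg1=foo
arg2=bar
arg3=baz
EOF

# Source it and expand the same ${argN} references used in ExecStart.
# (systemd does its own parsing; this is only an approximation.)
. /tmp/args.cfg
echo "/usr/bin/myservice ${arg1} ${arg2} ${arg3}"
```

This prints /usr/bin/myservice foo bar baz, which is the command line systemd would ultimately run.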

There are numerous other configuration parameters for services. For instance, set Type=oneshot if the application is not a daemon and will exit immediately.

Use ExecStartPost to run additional commands after the main process starts (e.g. maybe you want to write a pidfile). ExecStartPre and ExecStopPost also exist.
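As a sketch of the pidfile case, something like the fragment below could be added to the unit; the pidfile path is made up for illustration, and $MAINPID is a variable systemd expands to the main process's PID:

```ini
[Service]
ExecStart=/usr/bin/myservice ${arg1} ${arg2} ${arg3}
# Hypothetical example: record the main PID once the service has started.
ExecStartPost=/bin/sh -c 'echo $MAINPID > /run/myservice.pid'
```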

The unit file must be present in /etc/systemd/system (run systemctl daemon-reload after adding or editing it). To make it run on subsequent boots:

    systemctl enable myservice

To start the service:

    systemctl start myservice

To check the service’s logs:

    journalctl -u myservice


How I reduced 40+ seconds from my Fedora’s boot time

My laptop has been running Fedora for some time now and I like it a lot. Since it’s pretty old, I’ve always forgiven the painfully slow boot time it had. After upgrading to Fedora 20 (Heisenbug), I decided to see if I could do something about it. Turns out I certainly could.

Analysis

The first step was to understand what was slowing the boot down the most. systemd-analyze is a nice tool for this. I ran it like this:

    systemd-analyze plot > ~arun/plot.svg

And it gave me this neat graph:

orig-boot-plot

Unnecessary services

There’s a lot of stuff there that I’m not familiar with. Rather than going through each in order, I decided to wipe out a bunch of unneeded services in one go (found in this detailed guide). So:
    for i in abrt*.service auditd.service avahi-daemon.* bluetooth.* \
        dev-hugepages.mount dev-mqueue.mount fedora-configure.service \
        fedora-loadmodules.service fedora-readonly.service ip6tables.service \
        irqbalance.service mcelog.service rsyslog.service sendmail.service \
        sm-client.service sys-kernel-config.mount sys-kernel-debug.mount; do
        systemctl mask $i
    done

Dynamic Firewall

Firewalld seemed slow for me as well, and I didn't need a dynamic firewall for my simple uses. I replaced it with good old iptables like this:

    systemctl mask firewalld.service
    systemctl enable iptables.service
    systemctl enable ip6tables.service

The mask option is similar to (but stronger than) disable: it links the unit to /dev/null so that it cannot be started even as a dependency of another service. The default iptables rules disallow all incoming connections, so no further tweaks were needed here.

Plymouth

plymouth-quit-wait.service was the next big culprit I tackled. This turned out to be a bug where a non-existent file was being loaded. The 25th comment in the link is what I used to work around it.

Journald

Finally, there was one long pause that I didn't understand (from the 15th to the 38th second in that graph). After some digging I discovered this weird snippet in journalctl's output:

    Dec 09 11:55:25 oroboros systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage...
    Dec 09 11:55:25 oroboros systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
    Dec 09 11:55:25 oroboros systemd[1]: Starting Security Auditing Service...
    Dec 09 11:55:25 oroboros systemd[1]: Starting Recreate Volatile Files and Directories...
    Dec 09 11:55:26 oroboros systemd-journal[197]: Allowing system journal files to grow to 2.9G.
    Dec 09 11:56:10 oroboros systemd-journal[197]: Forwarding to syslog missed 57 messages.

The snippet is from an earlier boot, but the massive gap between the two messages at the end seemed indicative of the problem I was looking for. Subsequent digging revealed that the journal was storing messages dating back several months, and had really grown in size:

    journalctl --disk-usage
    Journals take up 947.2M on disk.

My fix was to switch the Storage setting to volatile in /etc/systemd/journald.conf:

    Storage=volatile

Since then I have reverted it to the auto setting it had earlier, and kept saner limits of 100M each for the SystemMaxUse and SystemMaxFileSize parameters.
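The corresponding journald.conf settings look something like this (the 100M values are just the limits mentioned above):

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=auto
SystemMaxUse=100M
SystemMaxFileSize=100M
```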

Result

The final graph now looks like this:

final-boot-plot

A noticeable improvement 🙂