How I shaved 40+ seconds off my Fedora’s boot time

My laptop has been running Fedora for some time now and I like it a lot. Since it’s pretty old, I’ve always forgiven the painfully slow boot time it had. After upgrading to Fedora 20 (Heisenbug), I decided to see if I could do something about it. Turns out I certainly could.

Analysis

The first step was to understand what was slowing the boot down the most. systemd-analyze is a nice tool for this. I ran it like this:
systemd-analyze plot >~arun/plot.svg
And it gave me this neat graph:

orig-boot-plot
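Besides plot, systemd-analyze has a couple of other subcommands that are handy for this kind of digging (these need a running systemd, so the output below will differ from machine to machine):

```shell
# Overall time split between kernel, initrd and userspace:
systemd-analyze

# All units, sorted by how long each took to initialize:
systemd-analyze blame

# The chain of units that actually gated the boot:
systemd-analyze critical-chain
```

blame is the quickest way to spot the worst offenders; critical-chain shows whether they were actually on the path that delayed boot.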

Unnecessary services

There’s a lot of stuff there that I’m not familiar with. Rather than going through each in order, I decided to wipe out a bunch of unneeded services in one go (found in this detailed guide). So:
for i in abrt*.service auditd.service avahi-daemon.* bluetooth.* \
    dev-hugepages.mount dev-mqueue.mount fedora-configure.service \
    fedora-loadmodules.service fedora-readonly.service \
    irqbalance.service mcelog.service rsyslog.service sendmail.service \
    sm-client.service sys-kernel-config.mount sys-kernel-debug.mount; do
    systemctl mask $i
done
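Masking is easy to verify and just as easy to reverse if something turns out to be needed after all. For example, taking bluetooth.service as the unit in question:

```shell
# A masked unit reports "masked" here:
systemctl is-enabled bluetooth.service

# Undo the mask; the unit goes back to its previous
# enabled/disabled state and can be started again:
systemctl unmask bluetooth.service
```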

Dynamic Firewall

Firewalld seemed slow for me as well, and I didn’t need a dynamic firewall for my simple use case. I replaced it with good old iptables like this:
systemctl mask firewalld.service
systemctl enable iptables.service
systemctl enable ip6tables.service
The mask option is similar to (but stronger than) disable: a masked service cannot be started at all, even manually or as a dependency of another unit. The default iptables rules disallow all incoming connections, so no further tweaks were needed here.
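With the static services enabled, the rules are read from /etc/sysconfig/iptables at boot. If you do end up tweaking rules at runtime, they can be inspected and persisted like this (run as root):

```shell
# Show the rules currently loaded in the kernel:
iptables -L -n -v

# Persist the current ruleset so iptables.service
# restores it on the next boot:
iptables-save > /etc/sysconfig/iptables
```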

Plymouth

plymouth-quit-wait.service was the next big culprit I tackled. This turned out to be a bug where a non-existent file was being loaded; the 25th comment in the link is the workaround I used.

Journald

Finally, there was one long pause that I didn’t understand (from the 15th to the 38th second in that graph). After some digging I discovered this odd snippet in journalctl’s output:
Dec 09 11:55:25 oroboros systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage…
Dec 09 11:55:25 oroboros systemd[1]: Starting Tell Plymouth To Write Out Runtime Data…
Dec 09 11:55:25 oroboros systemd[1]: Starting Security Auditing Service…
Dec 09 11:55:25 oroboros systemd[1]: Starting Recreate Volatile Files and Directories…
Dec 09 11:55:26 oroboros systemd-journal[197]: Allowing system journal files to grow to 2.9G.
Dec 09 11:56:10 oroboros systemd-journal[197]: Forwarding to syslog missed 57 messages.

The snippet is from an earlier boot, but the massive gap between the two messages at the end seemed indicative of the problem I was looking for. Further digging revealed that the journal had accumulated messages dating back several months and had grown considerably:
journalctl --disk-usage
Journals take up 947.2M on disk.
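On newer systemd versions (218 and later) the accumulated journal can be trimmed directly, without touching the config; this wasn’t an option for me at the time, but it is worth knowing about:

```shell
# Delete archived journal files until the total size
# drops to roughly 100M:
journalctl --vacuum-size=100M
```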

My fix was to switch the Storage setting to volatile in /etc/systemd/journald.conf:
Storage=volatile

Since then I have reverted it to the auto setting it had earlier, and kept saner values of 100M each for the SystemMaxUse and SystemMaxFileSize parameters.
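For reference, the relevant section of my /etc/systemd/journald.conf now looks roughly like this (journald accepts sizes with M and G suffixes; restart systemd-journald for changes to take effect):

```ini
[Journal]
Storage=auto
SystemMaxUse=100M
SystemMaxFileSize=100M
```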

Result

The final graph now looks like this:

final-boot-plot

A noticeable improvement 🙂