I started by running these two commands:
Code:
systemd-analyze critical-chain
systemd-analyze blame
Code:
$ systemd-analyze blame
6.573s plymouth-quit-wait.service
1.636s snapd.service
1.362s e2scrub_reap.service
1.098s fwupd.service
959ms udisks2.service
928ms dev-sdc4.device
823ms containerd.service
655ms systemd-journal-flush.service
644ms ModemManager.service
556ms snapd.seeded.service
555ms systemd-udevd.service
447ms accounts-daemon.service
425ms gdm.service
381ms polkit.service
...
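For comparison, the blame entries can be totalled with a small awk sketch (note these units largely start in parallel, so the sum is not wall-clock boot time, and entries over a minute, printed as e.g. `1min 2.3s`, are not handled here):

```shell
# Sum per-unit times from `systemd-analyze blame`, normalising ms/s to seconds.
# Rough sketch: ignores "Xmin Ys" entries that appear for very slow units.
systemd-analyze blame --no-pager | awk '
  $1 ~ /ms$/ { sub(/ms$/, "", $1); total += $1 / 1000; next }
  $1 ~ /s$/  { sub(/s$/,  "", $1); total += $1 }
  END        { printf "%.3fs combined (units run in parallel)\n", total }
'
```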
Code:
$ systemd-analyze critical-chain
...
graphical.target @10.936s
└─multi-user.target @10.936s
  └─plymouth-quit-wait.service @4.361s +6.573s
    └─systemd-user-sessions.service @4.321s +28ms
      └─network.target @4.288s
        └─wpa_supplicant.service @4.063s +223ms
          └─dbus.service @3.791s +242ms
            └─basic.target @3.780s
              └─sockets.target @3.780s
                └─snapd.socket @3.757s +22ms
                  └─sysinit.target @3.753s
                    └─snapd.apparmor.service @3.589s +164ms
                      └─apparmor.service @3.280s +290ms
                        └─local-fs.target @3.279s
                          └─run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount @3.289s
                            └─local-fs-pre.target @1.306s
                              └─systemd-tmpfiles-setup-dev.service @1.281s +24ms
                                └─systemd-sysusers.service @1.252s +28ms
                                  └─systemd-remount-fs.service @1.212s +26ms
                                    └─systemd-journald.socket @1.180s
                                      └─system.slice @1.159s
                                        └─-.slice @1.159s
So then I did
Code:
systemd-analyze plot > boot_analysis.svg
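To narrow things down to the snap-related units, a grep filter like this can be run over the blame list (just a sketch; the actual `dev-loopN.device` and `snap-*.mount` unit names vary per system and per installed snap):

```shell
# Show only snap loop-device and squashfs mount units in the blame list.
# dev-loopN.device / snap-*.mount names depend on the snaps installed.
systemd-analyze blame --no-pager | grep -E 'dev-loop[0-9]+\.device|snap-.*\.mount'
```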
After some googling, I learned that the dev-loop devices shown in the plot might have something to do with snapd packages.
I have also made other plots where the same devices are initialized much faster.
Am I understanding correctly that the dev-loop devices contribute a lot to my startup time? (And that plymouth-quit-wait.service shouldn't have anything to do with it?)
Why might they appear to start much faster in some of the plots? Am I overlooking or misunderstanding something?
How can I prevent those devices from taking so much time without removing snapd?
Any help is much appreciated!
Thanks!