You seem pretty smart, but I suspect somewhat clouded by your irrational fear of systemd. Were that not the case, you would realize there are perfectly legitimate counterpoints to every item listed above, making them positives in favor of systemd. I'm not going to argue them with you because it's already been done ad nauseam. Maybe you missed it. But almost all of them, to a "t", come back to the theme of better, more reliable performance, the logging in particular being a perfect example. Maybe you've never administered a system that logs enough to choke the fastest I/O available, dunno.
I don't give a crap about performance. No, really, I don't. I care more about availability: in the last administration job I did, I was in charge of backups and a build farm. I can't recall a time in the last 25 years when I actually needed to squeeze out every last cycle on a machine rather than add, or design in, buffer hardware to absorb growth. In other words, design to be scalable from the get-go.
Why I care about availability:
1: I need systems I designed to be available. Period. This includes services both on my main machines and on my failovers. If a service dies on a single machine I want it to STAY DEAD until my monitors can either switch to a failover machine or take that machine out of the failover pool. SystemD will just keep restarting the service (out of the box with the units most distros ship, anyways); see the drop-in sketch after this list. I see this as bad design either way you look at it.
1a: That means a service could actually be down, stuck in a restart cycle, on a failover machine right when data gets sent to it. Better to have the service stay dead so I can figure out WHY it fell on its face and correct the issue, even if it is a semi-random occurrence.
2: I can always throw more hardware at a problem if there is a desperate need for more performance. I will say I have never had issues come up where I had to run around with my hair on fire trying to get more hardware in an emergency. I planned ahead and made sure I had enough spare capacity to act as a buffer in SHTF scenarios. It was kind of my job, and it's why, after the first time I left, I was offered almost triple my original pay to come back and clean up after the guy who replaced me. He decided my "oldschool ways of setting everything up" had to go, and his "new ideas to make everything easier" cost the company millions in a few days of backups that didn't actually back up to anywhere.
3: For every argument that someone says is solved because systemD "has more performance", I can point out at least one area where it comes up worse compared to SysV / the UNIX philosophy (which, I might add, has withstood the test of time: ~47 years so far). I also covered why, at least in the case of service restarts, the example that always gets trotted out, the "reliability" argument is actually an argument to avoid using that system.
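To be concrete about point 1: the resurrection behavior lives in the unit file, so when I do have to run systemd on a server I end up pinning it down with a drop-in. A rough sketch of the sort of thing I mean ("mydaemon" is a made-up name, and to be fair, upstream's documented default for Restart= is "no"; it's the packaged units that tend to set on-failure or always):

    # /etc/systemd/system/mydaemon.service.d/no-resurrection.conf
    # ("mydaemon" is a placeholder; the drop-in directory is the standard override path)

    [Unit]
    # Cap how many start attempts systemd will allow within the window at all.
    StartLimitIntervalSec=60
    StartLimitBurst=1

    [Service]
    # Do not restart a crashed service; leave it dead so the monitors can pull
    # the box from the failover pool and I can go find out WHY it fell over.
    Restart=no

Which, yes, amounts to spending effort telling the init system to stop helping.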
As an aside, if you are saturating I/O with logs, you are either doing something VERY wrong, like logging an entire cluster to a single non-RAID disk, or you have some really serious problems, like an array dropping a ton of disks and trying to rebuild itself from spares on the fly while still logging and being used. That, or a truly poor design that didn't scale with your growth. At all.
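And even where journald is in the picture, the journal has knobs to keep it from eating a disk in the first place; a rough example of the kind of caps I mean (the values are illustrative, not recommendations):

    # /etc/systemd/journald.conf (or a drop-in under journald.conf.d/)
    [Journal]
    Storage=persistent
    # Hard ceiling on how much disk the journal may use, and how much to leave free.
    SystemMaxUse=2G
    SystemKeepFree=10G
    # Per-service rate limiting: messages beyond the burst get dropped rather than
    # letting one chatty daemon hammer the disk.
    RateLimitIntervalSec=30s
    RateLimitBurst=10000
    # Hand copies to the local syslog daemon, which can relay them off-box.
    ForwardToSyslog=yes

If log volume is genuinely your bottleneck, the fix is a dedicated logging tier that scales with you, not a faster journal on the same spindle.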
All that being said, I am not afraid of systemD. I don't like it, for many reasons including the feature creep and bloat, but I still use it on some of my machines. It works, mostly. Sometimes it needs a little tune-up with a large hammer (more often than SysV ever did for me, anyways), but it does work. I don't like it, but I don't specifically avoid it. Same with Windows / OS X / BSD, etc. I don't really like any OS, since none of them are perfect; they all have at least a few ugly warts. Well, except for iOS, that's just a toy piece of crap crammed into a phone. I use whatever OS best fits the needs of the job I am doing at the time.
SystemD is OK for desktops / laptops, but a pretty poor choice for servers, at least in this grumpy old network designer's opinion.