Why I don’t like systemd

Since a lot of other people have been writing about systemd recently, I decided I’d post my own thoughts about it. Note that I jotted these notes down in late 2012, before systemd got forcibly rammed into the plans for pretty much every Linux distribution, and based on reading all Lennart’s postings about it. It’s possible that some of these issues have been addressed.

  1. With systemd, every daemon has to hand over socket handling to systemd. That means portable daemons will have two different code paths, depending on whether they’re running on Linux or not. That’s a testing and reliability headache for developers.

  2. Because systemd controls the sockets, it will end up being a dependency of every daemon, unless distributions ship both systemd and non-systemd packages for all their daemons. No doubt that’s good for forcing systemd adoption, but it’s going to be a pain for everyone else.

  3. Got a daemon you want to run that hasn’t been systemd-ized? Good luck with that.

  4. Seriously, it’s 2012 and you’re trying to introduce a new configuration file syntax modeled on Windows .INI files?

  5. Systemd has a deliberately undocumented binary log file format, in an attempt to replace syslog. And no, you can’t turn off systemd’s syslog replacement and use a standard syslog.

  6. With systemd written in C and controlling all the TCP/IP ports, it will become a primary attack vector for crackers and malware. Because it’s the init daemon, every time a systemd security update is pushed, you’ll need to reboot. But don’t worry, I’m sure the author of PulseAudio can write the kind of bug-free code necessary for good security, right?
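The dual code path complained about in item 1 is easy to illustrate. Here is a sketch in Python of the socket-activation protocol as documented in the sd_listen_fds(3) man page (the fd numbering and LISTEN_* environment variables come from that page; the rest is my own):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd passed by systemd, per sd_listen_fds(3)

def get_listen_socket(port):
    """The dual code path: adopt a socket handed over by systemd if
    one was passed in, otherwise bind our own like a portable daemon."""
    if os.environ.get("LISTEN_PID") == str(os.getpid()) \
            and int(os.environ.get("LISTEN_FDS", "0")) >= 1:
        # systemd path: fd 3 is already bound and listening
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # portable path: create, bind and listen ourselves
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    return s
```

Both branches have to be kept working and tested, which is exactly the maintenance burden described above.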

Regarding item 5, the document arguing in favor of systemd having its own logging system gives a number of bogus reasons, so let’s go through them:

The message data is generally not authenticated

It can be if you configure syslog that way.
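For instance, rsyslog can authenticate sender and receiver with TLS certificates. A sketch (the parameter names are from rsyslog's omfwd/gtls documentation; the host names and paths are placeholders):

```
# Client side: forward logs over TLS, verifying the server's certificate
global(DefaultNetstreamDriver="gtls"
       DefaultNetstreamDriverCAFile="/etc/ssl/ca.pem")
action(type="omfwd" target="loghost.example.com" port="6514" protocol="tcp"
       StreamDriverMode="1"
       StreamDriverAuthMode="x509/name"
       StreamDriverPermittedPeers="loghost.example.com")
```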

The data logged is very free-form. Automated log-analyzers need to parse human language strings to a) identify message types, and b) parse parameters from them.

Doesn’t have to be that way.

The timestamps generally do not carry timezone information, even though some newer specifications define support for it.

I wrote about that in 2005 and a fix has been available since 2010. Here’s a log line from my server:

<22>1 2014-05-18T14:22:56.950703-05:00 castor postfix 3139 - -  D4F38C1317: removed
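That line uses the RFC 5424 format, whose timestamps carry an explicit UTC offset. Python's standard library parses one directly, for example:

```python
from datetime import datetime

# The timestamp from the log line above includes a UTC offset, so the
# instant is unambiguous regardless of the server's local timezone.
ts = datetime.fromisoformat("2014-05-18T14:22:56.950703-05:00")

print(ts.utcoffset())         # -1 day, 19:00:00 (i.e. -05:00)
print(ts.tzinfo is not None)  # True
```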

Syslog is only one of many log systems on local machines. Separate logs are kept for utmp/wtmp, lastlog, audit, kernel logs, firmware logs, and a multitude of application-specific log formats.

Adding systemd is not going to magically stop bad software from writing its own logs in its own format, any more than the existence of syslog did. And the fact that syslog writes to multiple separate log files rather than one huge database is a feature, not a bug.

Reading log files is simple but very inefficient. Many key log operations have a complexity of O(n). Indexing is generally not available.

If only you could syslog to a database… Oh, wait, you can.
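For example, rsyslog ships an ommysql output module; two lines of configuration send everything to MySQL (the host, database name, user and password here are placeholders):

```
$ModLoad ommysql
*.* :ommysql:localhost,Syslog,rsysloguser,secret
```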

The syslog network protocol is very simple, but also very limited. Since it generally supports only a push transfer model, and does not employ store-and-forward, problems such as Thundering Herd or packet loss severely hamper its use.

Whereas systemd sends network log events over HTTP, a protocol known for its reliability, efficiency and store-and-forward support?

Log files are easily manipulable by attackers, providing easy ways to hide attack information from the administrator.

Also fixable with properly configured standard syslog.

Unless manually scripted by the administrator a user either gets full access to the log files, or no access at all.

…which is why syslog writes to multiple log files, so that access can be handled on a granular basis.

The meta data stored for log entries is limited, and lacking key bits of information, such as service name, audit session or monotonic timestamps.

Repetition, see above.

Automatic rotation of log files is available, but less than ideal in most implementations: instead of watching disk usage continuously to enforce disk usage limits rotation is only attempted in fixed time intervals, thus leaving the door open to many DoS attacks.

It’s not hard to imagine how this could be fixed by integrating rotation into syslog. In fact, there are already implementations via logging to channels.
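rsyslog's output channels, for instance, trigger a rotation script as soon as a size limit is hit rather than on a timer. A sketch (the file name, size limit and script path are placeholders):

```
# Rotate as soon as the file reaches 100 MB, not on a cron schedule
$outchannel mail_rotation,/var/log/mail.log,104857600,/usr/local/sbin/rotate-mail.sh
mail.* :omfile:$mail_rotation
```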

Compression in the log structure on disk is generally available but usually only as effect of rotation and has a negative effect on the already bad complexity behaviour of many key log operations.

You can log to zip files if you care mostly about disk space, or log to a database if you care mostly about speed.

Classic Syslog traditionally is not useful to handle early boot or late shutdown logging, even though recent improvements (for example in systemd) made this work.

That’s why we have the kernel ring buffer and klogd, and then transfer those initial log records to syslog later on.

Binary data cannot be logged, which in some cases is essential (Examples: ATA SMART blobs or SCSI sense data, firmware dumps)

I see that as a feature. You log the binary blob as-is in a directory, then log its name and the other metadata in the syslog.
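As a sketch of that scheme (the function name and message format are my own invention, not an existing tool): store the blob under a content-derived name, and put only the reference in the plain-text log message.

```python
import hashlib
from pathlib import Path

def store_blob(blob: bytes, directory: Path) -> str:
    """Write a binary blob to its own file and return a one-line,
    plain-text syslog message referring to it."""
    digest = hashlib.sha256(blob).hexdigest()
    path = directory / f"{digest}.bin"
    path.write_bytes(blob)
    # Only plain text goes to syslog; the blob itself stays on disk.
    return f"blob stored: file={path.name} sha256={digest} size={len(blob)}"
```

The returned message can then be handed to the usual syslog call, and the administrator keeps a greppable text log plus the raw blob.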

Basically, I don’t see any functionality in systemd’s journal that necessitates replacing syslog.

Controller-based server-side validation in XPages JavaScript

Think about your last tax return. If you have investment income, a whole additional set of questions has to be answered. If you own overseas investments, that’s another set of fields to fill out. Disposed of any stock? That’ll be more questions.

My experience is that most business forms end up like this. They start out simple, but sooner or later someone says “Well, if they answer yes to this question, we’ll need to ask them another question with three possible answers, and depending on their answer they might need to put a project code in another field…” and suddenly you’re off down the dynamic forms rat hole.

XPages makes dynamic forms pretty easy. You can use an xp:panel to replace a div element, and then use partial refresh to make the contents of the div dynamic. Because you’re refreshing the DOM tree rather than just making things visible or invisible with CSS, you don’t need to have every possible section present on the form at initial page load, which helps performance.
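A sketch of the pattern (the component ids and field name are invented for illustration): a checkbox whose onchange event partially refreshes a panel, and a panel whose contents render conditionally.

```xml
<xp:checkBox id="hasInvestments" value="#{document1.HasInvestments}"
    checkedValue="1" uncheckedValue="0">
  <xp:eventHandler event="onchange" submit="true"
    refreshMode="partial" refreshId="investmentPanel">
  </xp:eventHandler>
</xp:checkBox>

<xp:panel id="investmentPanel">
  <xp:panel rendered="#{javascript:document1.getItemValueString('HasInvestments') == '1'}">
    <!-- investment-income fields appear only when the box is ticked -->
  </xp:panel>
</xp:panel>
```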

In principle, validation of XPages forms is easy too. You select a component which represents some sort of data input field, add one or more validators, and put an error display control somewhere suitable.

The problem is that when the form has sections which appear and disappear, simple field-by-field validation can explode in complexity quite quickly. If a checkbox makes ten new fields appear in a section, then every validator on those fields must check the checkbox state before deciding whether to throw an error.

Worse, the actual document won’t necessarily have been updated if the form is being used to create a new document. So the checkbox state will need to be checked using getComponent(), which is relatively slow.

So my challenge was to concentrate all my validation logic into a single controller function, ideally written in server-side JavaScript, while still putting the error messages in the appropriate error display components.

The answer turns out to be a bit like doing validation in LotusScript in the Notes client: you use the querySaveDocument event. In XPages, though, the event is associated with a dominoDocument data source rather than a form, because a single XPages form can have multiple documents associated with it.

The one piece of somewhat subtle code required is a function to create the error notifications, so here’s that code, adapted from an example by Don Mottolo:
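(A minimal sketch of such a helper, reconstructed from the description below rather than reproducing Don Mottolo's original; it assumes the XPages runtime globals getComponent and facesContext, and the standard JSF FacesMessage class.)

```javascript
// Server-side JavaScript sketch (not the original listing).
// Assumes the XPages runtime: getComponent(), facesContext,
// and javax.faces.application.FacesMessage.
var Validation = {
  postError: function (component, message) {
    // Accept either a component object or its XPages component ID
    if (typeof component === "string") {
      component = getComponent(component);
    }
    // Flag the component as invalid so error display controls fire
    component.setValid(false);
    // Attach the message to the component in the JSF context
    facesContext.addMessage(
      component.getClientId(facesContext),
      new javax.faces.application.FacesMessage(message)
    );
  }
};
```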

The component argument can either be the actual component object, or its XPages component ID. Notice that the target component is flagged as invalid, as well as having the error message added to the JSF context.

An XPage which uses the above code will start off something like this:

<xp:view xmlns:xp="http://www.ibm.com/xsp/core" xmlns:xc="http://www.ibm.com/xsp/custom">

  <!-- Load the validation helper -->
  <xp:this.resources>
    <xp:script src="/mx_validation.jss" clientSide="false"></xp:script>
  </xp:this.resources>

  <xp:this.data>
    <xp:dominoDocument formName="Order"
      var="document1" action="createDocument">
      <xp:this.querySaveDocument><![CDATA[#{javascript:
  var result = true;
  if (document1.getItemValueString("Name").isEmpty()) {
    Validation.postError("name", "You must enter your name.");
    result = false;
  }
  return result; }]]></xp:this.querySaveDocument>
    </xp:dominoDocument>
  </xp:this.data>

  <!-- input fields, error display controls and so on -->
</xp:view>

Here we’re validating that a single field with the id ‘name’ has a value.

Remember that if you use postError to post an error for a field, that field must have an error display component associated with it, or the page must have a multiple error display component. Otherwise the errors will never be displayed to the user, and you’ll get errors in the server log.
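For example, with a field whose id is ‘name’, either a per-field xp:message or a page-level xp:messages control will do (the ids here are arbitrary):

```xml
<xp:inputText id="name" value="#{document1.Name}"></xp:inputText>
<xp:message id="message1" for="name"></xp:message>

<!-- or one control that shows every queued error on the page -->
<xp:messages id="messages1"></xp:messages>
```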

Obviously the entire validation function can easily be factored out of the XPage into its own SSJS library.

You can also use the querySave function to add fields to your document and perform other random computations on it, just like you would in LotusScript in the Notes client.

Teletext and hypertext

Before the web, we had Teletext.

As a child growing up with a home computer, I was fascinated by Teletext. Unfortunately, we didn’t have a TV with a Teletext decoder, as those were pretty pricey. My only chance to enjoy real Teletext was when visiting relatives. However, I had a BBC Computer, which had a Teletext mode known as Mode 7.

In Mode 7, you could put a control character at a particular point on the screen, and all character cells to the right of that cell would display block graphics in a particular color when filled with a suitable character code. Each character cell was split into a 2×3 grid of 6 pixels, giving 64 possible combinations; the 64 mosaic characters sat at codes 160 to 191 and 224 to 255. Something similar exists today in the form of the Unicode block-drawing characters.

The Mode 7 screen was 40×25, so your maximum graphical resolution was 78×75, with the left column of cells being used up switching on graphics mode. Even for the 1980s that was pretty awful, but it had the benefit that an entire screen of text was only 1000 bytes. On a system with just 32K of memory in total, where video RAM was taken out of total system RAM, that was a major benefit. Plus, it meant you could fit 100 pages on a floppy disk!
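The pixel-to-character mapping described above can be sketched in a few lines of Python. (The weights follow the standard teletext mosaic layout, in which the sixth pixel lives on bit 6 rather than bit 5, which is why the codes span 160 to 191 and 224 to 255 rather than one contiguous run.)

```python
# Pixel weights for a 2x3 teletext mosaic cell, in the order
# (0,0), (1,0), (0,1), (1,1), (0,2), (1,2).
# Note the jump from 16 to 64: bit 5 is reserved in the character set.
WEIGHTS = [1, 2, 4, 8, 16, 64]

def mode7_char(pixels):
    """pixels: six booleans, left-to-right, top-to-bottom.
    Returns the Mode 7 character code displaying that block pattern."""
    return 160 + sum(w for w, on in zip(WEIGHTS, pixels) if on)

print(mode7_char([True] * 6))   # 255, a solid block
print(mode7_char([False] * 6))  # 160, an empty cell
```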

As well as the poor resolution, there were obviously limits on which colors could be next to other colors, and how complicated your shapes could be. But the biggest problem was that working out which character codes you needed to (say) draw a box on the screen was a major pain. So in 1984 I wrote a teletext graphics editor which would allow me to create my own Teletext pages, with an easier way to draw lines and shapes and select colors. It also eventually supported automatic generation of banner text, text written in pixels. (Which, because the pixels were so big, was huge.)

I shared the program with friends at school, where we had a lab of BBC Micros. I also sent a copy to Personal Computer World magazine (PCW), who (to my surprise) published it. (This was back in the days when computer magazines printed listings in BASIC which readers would type in for themselves, and also back in the days when PCW wasn’t all IBM PCs all the time.)

Encouraged by my unexpected success, I went on to develop my own network-aware Teletext information system for the school. Users could create as many pages as they wanted, stored in their network filespace. (An Amcom E-net system with a hard disk.) Each page could link to other pages by username and page number. You could select page links using cursor keys, then hit a key to go to that page. It was basically a primitive multi-user hypertext system, like the web, but in 1985, five years before the first web browser. I had been inspired by reading about the hypertext project Ted Nelson worked on with IBM and Brown University in the 1960s, so I don’t claim any genius-level originality; hypertext was something that was going to happen, and we were just waiting for networks to get good enough.

I didn’t stop with text and graphics, though. I extended the system to allow downloadable executable code, so you could use it to build a hypertextual menu of educational software which you could launch right from the menu screens.

For some reason, nobody else at school was as excited by the possibilities as I was. Eventually I got an Atari ST, went to university, and started building hypertext systems on Unix. The code for my network teletext system is long lost at this point. I don’t even have the floppy disks. I wish I had kept them, if for no other reason than to provide examples of prior art to fight patent trolls.

I finally got my own Teletext TV in 1990, and enjoyed it for a few years before emigrating to a country that had apparently never heard of anything that advanced.

For more examples of Teletext pages, see the Teletext museum. It’s now being rediscovered as an artistic medium.