supervision - discussion about system services, daemon supervision, init, runlevel management, and tools such as s6 and runit
From: "Laurent Bercot" <>
To: "Arjun D R" <>,
Subject: Re: S6 Queries
Date: Mon, 02 Aug 2021 08:27:30 +0000	[thread overview]
Message-ID: <em5397e5ea-1525-404b-9490-b39ef0cea0a6@elzian> (raw)
In-Reply-To: <>

>1. In systemd, the services are grouped as targets and each target depends
>on another target as well. They start as targets. [ex: Reached
>, Reached, Reached UI target,...]. Is there
>any way in S6 to start the init system based on bundles?

  Yes, that is what bundles are for. In your stage 2 boot script
(typically /etc/s6-linux-init/current/scripts/rc.init), you should
invoke s6-rc as:
   s6-rc change top
if "top" is the name of your default bundle, i.e. the bundle that
contains all the services you want to start at boot time. You can
basically convert the contents of your systemd targets directly into
contents of your s6-rc bundles; and you decide which one will be
brought up at boot time via the s6-rc invocation in your stage 2
init script.
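  For illustration, here is a minimal sketch of what the source
definition of such a bundle could look like. The service names and
paths below are made up, and the contents.d/ layout is the one used
by s6-rc 0.5.0 and later:

```shell
# Hypothetical s6-rc source directory; "sshd" and "ntpd" are
# placeholder service names, not taken from any real setup.
src=${TMPDIR:-/tmp}/s6-rc-source-demo
mkdir -p "$src/top/contents.d"

# A bundle is a directory whose "type" file contains the word "bundle"
echo bundle > "$src/top/type"

# One empty file per member service in contents.d/
touch "$src/top/contents.d/sshd" "$src/top/contents.d/ntpd"

ls "$src/top/contents.d"
```

You would then compile the source directory with s6-rc-compile, and
the "s6-rc change top" invocation in rc.init brings the whole bundle
up at boot.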

>2. Are there any ways to have loosely coupling dependencies? In systemd, we
>have After=. After option will help the current service to start after the
>mentioned service (in after). And the current service will anyway start
>even if the mentioned service in After fails to start. Do we have such
>loosely coupled dependency facility in S6?

  Not at the moment, no. The next version of s6-rc will allow more types
of dependencies, with clearer semantics than the systemd ones (After=, 
Requires= and Wants= are not orthogonal, which is unintuitive and causes
misuse); but it is still in early development.

  For now, s6-rc only provides one type of dependency, which is the
equivalent of Requires+After. I realize this is not flexible enough
for a lot of real use cases, which is one of the reasons why another
version is in development. :)
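  To make the comparison concrete, here is a sketch of how that single
dependency type is declared in an s6-rc source directory. The service
names are hypothetical; dependencies.d/ is the s6-rc 0.5.x format
(older versions use a newline-separated "dependencies" file instead):

```shell
# Hypothetical services "foo" and "bar": declaring that foo requires
# bar AND starts after it -- the Requires= + After= equivalent.
src=${TMPDIR:-/tmp}/s6-rc-dep-demo
mkdir -p "$src/foo/dependencies.d"

echo oneshot > "$src/foo/type"

# The "up" file of a oneshot is an execline command line
echo 'echo foo up' > "$src/foo/up"

# One empty file per dependency in dependencies.d/
touch "$src/foo/dependencies.d/bar"

ls "$src/foo/dependencies.d"
```

With this definition, s6-rc will refuse to bring foo up if bar fails,
which is exactly the strong coupling described above.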

>3. Is there any tool available in S6 to measure the time taken by each
>service to start? We can manually measure it from the logs, but still
>looking for a tool which can provide accurate data.

  Honestly, if you use the -v2 option to your s6-rc invocation, as in
   s6-rc -v2 change top
and you ask the catch-all logger to timestamp its lines (which should
be the default, but you can change the timestamp style via the -t
option to s6-linux-init-maker), then the difference of timestamps
between the lines:
   s6-rc: info: service foo: starting
   s6-rc: info: service foo successfully started
will give you a pretty accurate measurement of the time it took service
foo to start. These lines are written by s6-rc exactly as the
"starting" or "completed" event occurs, and they are timestamped by
s6-log immediately; the code path is the same for both events, so the
delays cancel out, and the only inaccuracy left is randomness due to
scheduling, which should not be statistically significant.

  At the moment, the s6-rc log is the easiest place to get this data
from. You could probably hack something with the "time" shell command
and s6-svwait, such as
   s6-svwait -u /run/service/foo ; time s6-svwait -U /run/service/foo
which would give you the time it took for foo to become ready; but
I doubt it would be any more accurate than using the timestamps in the
s6-rc logs, and it's really not convenient to set up.
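  As a rough sketch of the log-based approach, assuming ISO 8601
timestamps (the -t2 style) and a fabricated two-line log excerpt, an
awk script can compute the per-service delta:

```shell
# Fabricated catch-all logger excerpt; real logs would live wherever
# your catch-all logger writes (e.g. under /run/uncaught-logs).
log=${TMPDIR:-/tmp}/s6-rc-timing-demo.log
cat > "$log" <<'EOF'
2021-08-02 08:27:30.100000000  s6-rc: info: service foo: starting
2021-08-02 08:27:32.350000000  s6-rc: info: service foo successfully started
EOF

# Subtract the time-of-day fields of the two events (assumes both
# happen on the same day, which is fine for boot measurements).
awk '/service foo: starting/ { split($2, a, ":"); t0 = a[1]*3600 + a[2]*60 + a[3] }
     /service foo successfully started/ { split($2, b, ":"); print (b[1]*3600 + b[2]*60 + b[3]) - t0 }' "$log"
```

For this excerpt the script prints 2.25, i.e. foo took 2.25 seconds
to start.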

>4. Does the S6 init system provide better boot up performance compared to
>systemd ?  One of our main motives is to attain better bootup performance.
>Is our expectation correct?

  The boot up performance should be more or less *similar* to systemd.
The code paths used by the s6 ecosystem are much shorter than the ones
used by systemd, so _in theory_ you should get faster boots with s6.

  However, systemd cheats, by starting services before their dependencies
are ready. For instance, it will start services before any kind of
logging is ready, which is pretty dangerous for several reasons. As a
part of its "socket activation" thing, it will also claim readiness
on some sockets before even attempting to run the actual serving
processes (which may fail, in which case readiness was a lie.)
Because of that, when everything goes well, systemd cuts corners that
s6 does not, and may gain some advantage.

  So all in all, I expect that depending on your system, the difference
in speed will not be remarkable. On very simple setups (just a few
services), systemd's overhead may be noticeable and you may see real
improvements with s6. On complex setups with lots of dependencies, s6
might still have the speed advantage but I don't think it will be
anything amazing. The real benefit of s6 is that it achieves
roughly the same speed as systemd *while being more reliable*.

  If you actually make bootup speed comparisons between systemd and s6,
please share them! I am interested in that kind of benchmark, and
I'm sure the community would like to see numbers as well.



Thread overview: 8+ messages
2021-08-02  4:54 Arjun D R
2021-08-02  8:27 ` Laurent Bercot [this message]
2021-08-02 18:07   ` Steve Litt
2021-08-02 19:39     ` Laurent Bercot
2021-08-02 20:42       ` Steve Litt
2021-08-02 21:40         ` Laurent Bercot
2021-08-11 11:05   ` Arjun D R
2021-08-11 14:21     ` Laurent Bercot
