supervision - discussion about system services, daemon supervision, init, runlevel management, and tools such as s6 and runit
* restarting s6-svscan (as pid 1)
@ 2023-11-17 22:20 dan
  2023-11-17 22:38 ` adam
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: dan @ 2023-11-17 22:20 UTC (permalink / raw)
  To: supervision


This may be a weird question: is there any way to persuade
s6-svscan (as pid 1) to restart _without_ doing a full hardware reboot?
The use case I have in mind is: starting from a regular running system,
I want to create a small "recovery" system in a ramdisk and switch to it
with pivot_root so that the real root filesystem can be unmounted and
manipulated. (This is instead of "just" doing a full reboot into an
initramfs: the device has limited storage space and keeping a couple
of MB around permanently just for "maintenance mode" doesn't seem like a
great use of it)

I was thinking I could use the .s6-svscan/finish script to check for the
existence of the "maintenance mode" ramfs, remount it onto /
and then `exec /bin/init` as its last action, though it seems a bit
cheesy to have a file called `finish` that actually sometimes performs
`single-user-mode` instead. Would that work?
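Concretely, I had something along these lines in mind (completely untested,
and the /run/maintenance path is just a name I made up for illustration):

    #!/bin/sh
    # .s6-svscan/finish -- s6-svscan execs into this when it exits
    if [ -x /run/maintenance/bin/init ]; then
        # a maintenance ramfs has been prepared: make it the new root
        cd /run/maintenance
        mkdir -p oldroot
        pivot_root . oldroot
        exec chroot . /bin/init <dev/console >dev/console 2>&1
    fi
    # normal case: carry on with the usual final shutdown step
    # (placeholder command)
    exec /sbin/halt -f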

Perhaps a more general use case for re-execing pid 1 would be after OS
upgrades as an alternative to rebooting - though other than wanting to
preserve uptime for bragging rights I can't see any real advantage...

Any thoughts?


Daniel


* Re: restarting s6-svscan (as pid 1)
  2023-11-17 22:20 restarting s6-svscan (as pid 1) dan
@ 2023-11-17 22:38 ` adam
  2023-11-17 22:45 ` Steve Litt
  2023-11-18  0:58 ` Laurent Bercot
  2 siblings, 0 replies; 6+ messages in thread
From: adam @ 2023-11-17 22:38 UTC (permalink / raw)
  To: dan, supervision

Quoting dan@telent.net (2023-11-17 14:20:32)
> I was thinking I could use the .s6-svscan/finish script to check for the
> existence of the "maintenance mode" ramfs, remount it onto /
> and then `exec /bin/init` as its last action, though it seems a bit
> cheesy to have a file called `finish` that actually sometimes performs
> `single-user-mode` instead. Would that work?

I think your use case is *precisely* what .s6-svscan/finish is for -- it's how you
get s6-svscan to exec() into some other process.  That other process can be an
instance of itself.
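In the simplest case that's a two-line script (assuming, say, a scandir at
/run/service):

    #!/bin/sh
    # .s6-svscan/finish -- exec straight back into a fresh s6-svscan
    exec s6-svscan /run/service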

The fact that systemd has a special self-exec() mechanism always seemed weird
and bizarre.  Just use the general mechanism.

> Perhaps a more general use case for re-execing pid 1 would be after OS
> upgrades as an alternative to rebooting

Sure, if you upgrade libc or your compiler and you want s6-svscan to pick up
the new versions, this is an easy way to do it.

> though other than wanting to preserve uptime for bragging rights I can't see
> any real advantage...

You can pass arbitrarily large chunks of data to a re-exec()'ed pid 1.  It's not
always easy to do that across a reboot, since you have to pass the data to the
bootloader and back.  Also "the data" could be open file descriptors.

  - a


* Re: restarting s6-svscan (as pid 1)
  2023-11-17 22:20 restarting s6-svscan (as pid 1) dan
  2023-11-17 22:38 ` adam
@ 2023-11-17 22:45 ` Steve Litt
  2023-11-18  0:58 ` Laurent Bercot
  2 siblings, 0 replies; 6+ messages in thread
From: Steve Litt @ 2023-11-17 22:45 UTC (permalink / raw)
  To: supervision

dan@telent.net said on Fri, 17 Nov 2023 22:20:32 +0000


>I was thinking I could use the .s6-svscan/finish script to check for the
>existence of the "maintenance mode" ramfs, remount it onto /
>and then `exec /bin/init` as its last action, though it seems a bit
>cheesy to have a file called `finish` that actually sometimes performs
>`single-user-mode` instead. Would that work?

I don't know whether it would work. Why not try it? Also, you might need to
make changes to the 3 script.

As far as "cheesy", the only disadvantage is self-documentation, and
self-documentation could be preserved by adding a properly named script
and calling or exec'ing it.
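For example, something like this (the script names here are invented):

    #!/bin/sh
    # .s6-svscan/finish -- just a dispatcher; the real work lives in
    # scripts whose names say what they do
    if [ -e /run/maintenance-requested ]; then
        exec /etc/s6/enter-maintenance-mode
    fi
    exec /etc/s6/normal-shutdown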

SteveT

Steve Litt 

Autumn 2023 featured book: Rapid Learning for the 21st Century
http://www.troubleshooters.com/rl21


* Re: restarting s6-svscan (as pid 1)
  2023-11-17 22:20 restarting s6-svscan (as pid 1) dan
  2023-11-17 22:38 ` adam
  2023-11-17 22:45 ` Steve Litt
@ 2023-11-18  0:58 ` Laurent Bercot
  2023-11-19  0:31   ` Daniel Barlow
  2 siblings, 1 reply; 6+ messages in thread
From: Laurent Bercot @ 2023-11-18  0:58 UTC (permalink / raw)
  To: dan, supervision

>This may be a weird question, maybe: is there any way to persuade
>s6-svscan (as pid 1) to restart _without_ doing a full hardware reboot?
>The use case I have in mind is: starting from a regular running system,
>I want to create a small "recovery" system in a ramdisk and switch to it
>with pivot_root so that the real root filesystem can be unmounted and
>manipulated. (This is instead of "just" doing a full reboot into an
>initramfs: the device has limited storage space and keeping a couple
>of MB around permanently just for "maintenance mode" doesn't seem like a
>great use of it)

  As Adam said, this is exactly what .s6-svscan/finish is for. If your
setup somehow requires s6-svscan to exec into something else before
you shut the machine down, .s6-svscan/finish is the hook you have to
make it work.
  Don't let the naming stop you. The way to get strong supervision with
BSD init is to list your services in /etc/ttys. You can't get any
cheesier than that.

  That said, I'm not sure your goal is as valuable as you think it is.
If you have a running system, by definition, it's running. It has booted,
and you have access to its rootfs and all the tools on it; there is
nothing to gain by doing something fragile such as exec'ing into
another pid 1 and pivot_rooting. Unless I've missed something, the
amount of space you'll need for your maintenance system will be the
exact same whether you switch to it from your production system or from
cold booting.

--
  Laurent



* Re: restarting s6-svscan (as pid 1)
  2023-11-18  0:58 ` Laurent Bercot
@ 2023-11-19  0:31   ` Daniel Barlow
  2023-11-19  1:46     ` Laurent Bercot
  0 siblings, 1 reply; 6+ messages in thread
From: Daniel Barlow @ 2023-11-19  0:31 UTC (permalink / raw)
  To: Laurent Bercot, supervision

"Laurent Bercot" <ska-supervision@skarnet.org> writes:

>   That said, I'm not sure your goal is as valuable as you think it is.
> If you have a running system, by definition, it's running. It has 
> booted,
> and you have access to its rootfs and all the tools on it; there is
> nothing to gain by doing something fragile such as exec'ing into
> another pid 1 and pivot_rooting. Unless I've missed something, the
> amount of space you'll need for your maintenance system will be the
> exact same whether you switch to it from your production system or from
> cold booting.

I would agree with you generally, but in this case the running system
has a read-only squashfs filesystem which can't be updated except by
flashing a complete new filesystem image on top of it.  (I have tried
doing this on the running system just to see what would happen, but the
result was about as terminal as you might imagine it would be.) I
believe (have not yet tested) that I can relatively simply create the
maintenance system on the fly by copying a subset of the root fs into a
ramdisk, so it doesn't take any space until it's needed.
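Roughly what I have in mind, sketched with busybox purely for illustration
(none of this is tested yet):

    #!/bin/sh
    # build the maintenance system on demand, in a tmpfs
    mkdir -p /run/maintenance
    mount -t tmpfs -o size=16m tmpfs /run/maintenance
    cd /run/maintenance
    mkdir -p bin dev proc sys oldroot
    cp -a /bin/busybox bin/        # assumes a statically linked busybox
    for tool in sh init mount umount pivot_root; do
        ln -s busybox "bin/$tool"
    done
    cp -a /dev/console /dev/null dev/
    # then ask pid 1 to exit, e.g. s6-svscanctl -t /run/service;
    # .s6-svscan/finish sees the prepared tree and pivot_roots into it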


-dan


* Re: restarting s6-svscan (as pid 1)
  2023-11-19  0:31   ` Daniel Barlow
@ 2023-11-19  1:46     ` Laurent Bercot
  0 siblings, 0 replies; 6+ messages in thread
From: Laurent Bercot @ 2023-11-19  1:46 UTC (permalink / raw)
  To: Daniel Barlow, supervision


>I believe (have not yet tested) that I can relatively simply create the
>maintenance system on the fly by copying a subset of the root fs into a
>ramdisk, so it doesn't take any space until it's needed.

  The problem with that approach is that your maintenance system now
depends on your production system: after a rootfs change, you don't
have the guarantee that your maintenance system will be identical to
the previous one. Granted, the subset you want in your maintenance fs
is likely to be reasonably stable, but you never know; imagine your
system is linked against glibc, you copy libc.so.6, but one day one
of the coreutils grows a dependency on libpthread.so, and voilà, your
maintenance fs doesn't work anymore.
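  The usual band-aid is to compute the library list with ldd at copy time
instead of hard-coding it, something like the sketch below, but that only
narrows the window: dlopen()ed modules, NSS plugins and the like stay
invisible to ldd.

    #!/bin/sh
    # copy each tool plus whatever it links against *today*, per ldd
    # (illustrative and fragile; the tool list is arbitrary)
    dest=/run/maintenance
    for bin in /bin/sh /bin/mount /sbin/pivot_root; do
        cp -L "$bin" "$dest/bin/"
        ldd "$bin" | awk '/=> \//{print $3} /^[[:space:]]*\//{print $1}' |
        while read -r lib; do
            mkdir -p "$dest$(dirname "$lib")"
            cp -L "$lib" "$dest$(dirname "$lib")/"
        done
    done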

  You probably think the risk is small, and you're probably correct.
I just have a preference for simple solutions, and I believe that a
small, stable, statically-linked maintenance squashfs would be worth
the space it takes on your drive. :)

--
  Laurent


