supervision - discussion about system services, daemon supervision, init, runlevel management, and tools such as s6 and runit
From: Cameron Nemo <camerontnorman@gmail.com>
To: supervision@list.skarnet.org
Subject: Fwd: emergency IPC with SysV message queues
Date: Sun, 19 May 2019 12:59:43 -0700
Message-ID: <CALZWFR+6XJZBz3Y8Kn6PKMPc5VopLSvUJxEZMNBdaOAYTA-89w@mail.gmail.com>
In-Reply-To: <CALZWFRJws+eDthy8MCFj4PiVdRdRpM+afda6QAbEtYA4NT8ESQ@mail.gmail.com>

On Sun, May 19, 2019 at 10:54 AM Jeff <sysinit@yandex.com> wrote:
> [...]
> On Thu, May 16, 2019 at 09:25:09PM +0000, Laurent Bercot wrote:
> > [...]
> > Okay, so your IPC mechanism isn't just message queues, it's a mix
> > of two different channels: message queues *plus* signals.
>
> well, no. the mechanism is SysV msg queues and the protocol for
> clients to use to communicate includes - among other things - notifying
> the daemon (its PID is well known) by sending a signal to wake it up and
> have it process the request input queue.
> you do not just use fifos (the mechanism); there is also a protocol
> involved that clients and the server use.

What details need to be conveyed other than "stand up", "sit down",
and "roll over" (boot, sigpwr, sigint)?
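
For reference, a rough client-side sketch of the queue-plus-signal
protocol Jeff describes; the key, message layout and wake-up signal
are made up for illustration, a real init would publish its own:

  /* enqueue a request on the well-known SysV queue, then poke init */
  #include <signal.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>

  #define REQ_KEY 0x494e4954L            /* example key ("INIT") */

  struct req { long mtype; char mtext[64]; };

  static int send_request(const char *cmd, int wake_sig)
  {
      int qid = msgget(REQ_KEY, 0600);   /* queue created by init at boot */
      if (qid < 0) return -1;

      struct req r = { .mtype = 1 };
      strncpy(r.mtext, cmd, sizeof r.mtext - 1);
      if (msgsnd(qid, &r, strlen(r.mtext) + 1, IPC_NOWAIT) < 0) return -1;

      return kill(1, wake_sig);          /* tell init to drain its queue */
  }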

> > Signals for notification, message queues for data transmission. Yes,
> > it can work, but it's more complex than it has to be, using two Unix
> > facilities instead of one.
>
> indeed, this is more complex than - say - just sockets. on the other
> hand it does not involve any locking to protect against concurrent
> access to the resource, as would be needed with a fifo.
>
> and again: it is just an emergency backup solution, the preferred way
> is (Linux: abstract) unix sockets of course. such complicated ipc is
> not even necessary in my case, but for more complex and integrated
> inits it is. that was why i suggested it, in order to make their ipc
> independent of rw fs access.

Abstract namespace sockets have three shortcomings:

* not portable beyond Linux
* access restrictions have to be enforced by the server itself,
  since there are no filesystem permissions to lean on
* shared across different mount namespaces;
  one needs a new network namespace for separate instances

I am considering dropping them in favor of a plain socket in /run in my supervisor.
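
In code the difference is mostly in how the address is set up; a rough
sketch, with illustrative socket names:

  #include <stddef.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  /* Abstract socket (Linux-only): sun_path starts with a NUL byte, no
     filesystem node exists, it is reachable from every mount namespace
     that shares the network namespace, and access control has to be
     done by the server itself (e.g. by checking SO_PEERCRED). */
  static int bind_abstract(int fd)
  {
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      memcpy(sa.sun_path, "\0example-init", sizeof "\0example-init" - 1);
      return bind(fd, (struct sockaddr *)&sa,
                  offsetof(struct sockaddr_un, sun_path)
                  + sizeof "\0example-init" - 1);
  }

  /* Filesystem socket: an ordinary node under /run, scoped by the
     mount namespace and guarded by directory and file permissions. */
  static int bind_run(int fd)
  {
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      strncpy(sa.sun_path, "/run/example-init/control",
              sizeof sa.sun_path - 1);
      unlink(sa.sun_path);               /* drop a stale node, if any */
      return bind(fd, (struct sockaddr *)&sa, sizeof sa);
  }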

--
Cameron

>
> and of course one can tell a reliable init by the way it does ipc.
>
> > You basically need a small library for the client side. Meh.
>
> right, the client has to know the protocol.
> first try via the socket, then try to reach init via the msg queue.
> for little things like shutdown requests signaling suffices.
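
A rough sketch of that fallback order (socket first, then a bare
signal for something as simple as a shutdown request); the path and
the choice of SIGINT are illustrative, and the msg-queue step from the
sketch further up would slot in between:

  #include <signal.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  static int request_shutdown(void)
  {
      struct sockaddr_un sa = { .sun_family = AF_UNIX };
      strncpy(sa.sun_path, "/run/example-init/control",
              sizeof sa.sun_path - 1);

      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      if (fd >= 0 && connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0) {
          ssize_t w = write(fd, "shutdown\n", 9);  /* preferred: the socket */
          close(fd);
          return w == 9 ? 0 : -1;
      }
      if (fd >= 0) close(fd);

      return kill(1, SIGINT);            /* emergency: a bare signal suffices */
  }
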
>
> > A fifo or a socket works as both a notification mechanism and a
> > data transmission mechanism,
>
> true, but the protocol used by requests has to be designed as well.
> and in the case of fifos: they have to be guarded against concurrent
> writing by clients via locking (which requires rw fs access).
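
A minimal sketch of such a lock-guarded fifo write; the path is made
up, and note that single writes of up to PIPE_BUF bytes are atomic
anyway, so the lock only matters for longer or multi-part requests:

  #include <fcntl.h>
  #include <string.h>
  #include <sys/file.h>
  #include <unistd.h>

  static int send_over_fifo(const char *req)
  {
      int fd = open("/run/example-init/fifo", O_WRONLY);
      if (fd < 0) return -1;

      if (flock(fd, LOCK_EX) < 0) { close(fd); return -1; }  /* serialize writers */
      ssize_t w = write(fd, req, strlen(req));
      flock(fd, LOCK_UN);
      close(fd);
      return w < 0 ? -1 : 0;
  }
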
>
> > and it's as simple as it gets.
>
> the code used for the msg queueing is not complicated either.
>
> > Yes, they can... but on Linux, they are implemented via a virtual
> > filesystem, mqueue. And your goal, in using message queues, was to
> > avoid having to mount a read-write filesystem to perform IPC with
> > process 1 - so that eliminates them from contention, since mounting
> > a mqueue is just as heavy a requirement as mounting a tmpfs.
>
> indeed, they usually live in /dev/mqueue while posix shared memory
> lives in /dev/shm.
>
> that was the reason i did not mention them in the first place
> (i dunno if OpenBSD has them as they usually lag behind the other
> unices when it comes to posix conformance).
>
> i just mentioned them to point out that you can be notified about
> events involving the posix successors to the SysV ipc mechanisms.
> i have never used them in any way since they require a tmpfs for this.
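
For completeness, the notification being referred to is presumably
mq_notify(); a minimal sketch with an illustrative queue name and
signal (on Linux it needs /dev/mqueue mounted, which is exactly the
drawback above):

  #include <fcntl.h>
  #include <mqueue.h>
  #include <signal.h>

  static int watch_queue(void)
  {
      mqd_t q = mq_open("/example-init", O_RDONLY | O_CREAT, 0600, NULL);
      if (q == (mqd_t)-1) return -1;

      struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                              .sigev_signo  = SIGUSR1 };
      return mq_notify(q, &sev);   /* SIGUSR1 on arrival at an empty queue */
  }
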
>
> > Also, it is not clear from the documentation, and I haven't
> > performed tests, but it's even possible that the Linux implementation
> > of SysV message queues requires a mqueue mount just like POSIX ones,
> > in which case this whole discussion would be moot anyway.
>
> which in fact is not the case; try it with "ipcmk -Q". the same goes
> for the other SysV ipc mechanisms like shared memory and semaphores.
> you can see that easily when running firefox. it uses shared memory
> without semaphores akin to "epoch" (btw: if anyone uses "epoch" init
> it would be interesting to see what ipcs(1) outputs).
> this is in fact a very fast ipc mechanism (the fastest ??), though
> a clever protocol must be used to avoid deadlocks, concurrent access
> and the like. the msg queues have the advantage that messages are
> already separated and delivered in order of arrival.
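
A quick check of the "no mount needed" point, equivalent in effect to
"ipcmk -Q"; the resulting queue shows up in ipcs(1):

  #include <stdio.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>

  int main(void)
  {
      /* SysV queues need no mqueue or tmpfs mount to be usable */
      int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
      if (qid < 0) { perror("msgget"); return 1; }
      printf("queue id %d\n", qid);
      return 0;
  }
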
>
> > You've lost me there. Why do you want several methods of IPCs in
> > your init system? Why don't you pick one and stick to it?
>
> because SysV msg queues are a quite portable ipc mechanism that does
> not need any rw access, they make for a reliable emergency backup
> ipc method.
>
> > Sockets are available on every Unix system.
>
> these days (IoT comes to mind). but i guess SysV init (Linux) does
> not use them since there might have been kernels in use without
> socket support (?? dunno, just a guess).
> on the BSDs this should be true since it was said that they implement
> piping via socketpairs.
>
> > So are FIFOs.
>
> i do not like to use them at all, especially since they need rw fs
> access (is that true everywhere ??).
>
> > If you're going to use sockets by default, then use sockets,
> > and you'll never need to fall back on SysV IPC, because sockets work.
>
> true for abstract sockets (where available), dunno what access rights
> are needed to use unix sockets residing on a fs.


Thread overview: 13+ messages
2019-05-09  7:10 ipc in process #1 Jeff
2019-05-10 17:57 ` Laurent Bercot
2019-05-11 11:26   ` SysV shutdown util Jeff
2019-05-11 12:56     ` Laurent Bercot
2019-05-11 12:38   ` emergency IPC with SysV message queues Jeff
2019-05-11 13:34     ` Laurent Bercot
2019-05-16 15:15       ` Jeff
2019-05-16 21:25         ` Laurent Bercot
2019-05-16 21:54           ` Oliver Schad
2019-05-19 17:54           ` Jeff
     [not found]             ` <CALZWFRJws+eDthy8MCFj4PiVdRdRpM+afda6QAbEtYA4NT8ESQ@mail.gmail.com>
2019-05-19 19:59               ` Cameron Nemo [this message]
2019-05-19 20:26             ` Jeff
2019-05-19 18:38           ` Bob
