From: Jeff
Newsgroups: gmane.comp.sysutils.supervision.general
Subject: emergency IPC with SysV message queues
Date: Sat, 11 May 2019 14:38:25 +0200
To: supervision@list.skarnet.org

10.05.2019, 20:03, "Laurent Bercot":

> Have you tried working with SysV message queues before recommending
> them ?

yes, i had the code in an init once, but i never completed that init.
dealing with SysV msg queues was not such a big deal from the code side,
though. i used them merely as an additional emergency ipc method for when
the other ipc methods become impossible, although in practice signals
alone were sufficient in case of emergency.

> Because my recommendation would be the exact opposite. Don't
> ever use SysV message queues if you can avoid it. The API is very
> clumsy, and does not mix with event loops, so it constrains your
> programming model immensely - you're practically forced to use threads.
> And that's a can of worms you certainly don't want to open in pid 1.

that is wrong. just read the msg queue when a signal arrives (say SIGIO,
for example), catch that signal via the self-pipe trick, and there you
are: no need for threads. we were talking about process #1 anyway, so
some msg queue restrictions do not apply here (like finding the ipc id).
if you need to send replies back to clients, set up a second queue for
the replies (with ipc id 2; we are process #1 and free to grab ANY
possible ipc id). if those clients wait for their results and block, you
could also signal them after writing the reply to the second queue (that
means clients send their pid along with their requests; usually the
message type field is (ab?)used to pass that information).
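to make that concrete, here is a rough, untested sketch of what i mean.
the fixed key value (1), the message layout and the choice of SIGIO are
just my picks for illustration, nothing more. clients msgsnd() a request
whose mtype is their pid and then kill(1, SIGIO); process #1 catches
SIGIO through the self-pipe, and its ordinary event loop drains the
queue with IPC_NOWAIT, so nothing ever blocks and no threads are needed:

  #include <sys/types.h>
  #include <sys/ipc.h>
  #include <sys/msg.h>
  #include <sys/select.h>
  #include <signal.h>
  #include <string.h>
  #include <unistd.h>
  #include <fcntl.h>
  #include <errno.h>

  static int selfpipe[2];

  static void on_sigio(int sig)
  {
      int e = errno;
      (void)sig;
      write(selfpipe[1], "", 1);          /* just wake up the main loop */
      errno = e;
  }

  struct req { long mtype; char text[128]; };   /* mtype = client pid */

  int main(void)
  {
      struct sigaction sa;
      int q;

      pipe(selfpipe);
      fcntl(selfpipe[0], F_SETFL, O_NONBLOCK);
      fcntl(selfpipe[1], F_SETFL, O_NONBLOCK);

      memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_sigio;
      sigaction(SIGIO, &sa, 0);

      /* process #1 is free to claim a fixed, well-known key */
      q = msgget((key_t)1, IPC_CREAT | 0600);

      for (;;) {
          fd_set rfds;
          char buf[64];
          struct req m;

          FD_ZERO(&rfds);
          FD_SET(selfpipe[0], &rfds);
          if (select(selfpipe[0] + 1, &rfds, 0, 0, 0) < 0)
              continue;                   /* EINTR: a signal arrived */

          while (read(selfpipe[0], buf, sizeof buf) > 0) ;  /* drain pipe */
          while (msgrcv(q, &m, sizeof m.text, 0, IPC_NOWAIT) >= 0) {
              /* handle m.text; m.mtype is the sender's pid, so a reply
                 can go to a second queue, followed by kill(m.mtype, ...) */
          }
      }
  }

the client side is then nothing more than msgget() on the same key,
msgsnd(), kill(1, SIGIO), and, if it wants an answer, a blocking
msgrcv() on the reply queue filtered by its own pid as the msg type.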
> Abstract sockets are cool - the only issue is that they're not
> portable, which would limit your init system to Linux only.

well, we are talking about Linux here, since that is where that obscene
systemd monstrosity has been rampaging for quite a while now.

> If you're going for a minimal init, it's a shame not to make it
> portable.

in my case it will be portable and will work solely signal driven. but i
do not see too much need for other unices to change their init systems,
especially not for the BSDs. of course BSD init could be improved upon,
but it just works and it is rather easy to understand how. they did not
follow the SysV runlevel BS, and speeding their inits up would mostly
mean speeding up /etc/rc and friends. it also has respawn capabilities
via /etc/ttys to (re)start a supervisor from there. though i agree it is
quite big for not doing too much ... i hope the FreeBSD team will not
follow the BS introduced elsewhere (i do not think the other BSDs even
consider such a step, and OpenBSD will probably not); a danger for
FreeBSD was some of their users' demand to port that launchd joke over
from macOS or to come up with something even worse (more in the
direction of systemd).

> Really, the argument for an ipc mechanism that does not require a
> rw fs is a weak one.

not at all.

> Every modern Unix can mount a RAM filesystem nowadays,

that is a poor excuse; you wanted it to be portable, right ?

> and it is basically essential to do so if you want early logs.

use the console device for your early logs. that requires console
access, though ...

> Having no logs at all until you can mount a rw fs is a big
> no-no, and being unable to log what your init system does *at all*,
> unless you're willing to store everything in RAM until a rw fs
> comes up, is worse. An early tmpfs technically stores things in RAM
> too, but is much, *much* cleaner, and doesn't need ad-hoc code or
> weird convolutions to make it work.
> Just use an early tmpfs and use whatever IPC you want that uses a
> rendez-vous point in that tmpfs to communicate with process 1.
> But just say no to message queues.

that is just your opinion, since your solution works that way. other
solutions are of course possible, eg using msg queues just as a backup
ipc method, since we can exploit being process #1 here. in the case of
Linux i do not see any reason not to use abstract unix sockets as the
preferred ipc method in process #1 (except when the kernel does not
support sockets at all, which is rare these days, right ? on BSD AFAIK
the kernel also supports sockets, since socketpairs were said to be used
there to implement pipes). i have appended a small sketch of such an
abstract listener below. for the BSDs use normal unix sockets (this is
safe to do even on OpenBSD since we have emergency ipc via signals and
SysV msg queues); on Solaris SysV STREAMS (and possibly doors) might be
used, as we also have said backup mechanism in reserve.

but to be honest: a simple reliable init implementation should be solely
signal driven (a rough skeleton of that is also appended below). i was
just thinking about more complex integrated inits that have higher ipc
demands (dunno what systemd does :-).

you can tell a reliable init by the way it does its ipc. many inits do
not get that right and rely on ipc mechanisms that require rw fs access.
if mounting the corresponding fs rw fails, they are pretty hosed, since
their ipc does not work anymore and their authors were too clueless to
just react to signals in case of emergency, having abused the signal
numbers for rereading their config or other needless BS instead. i will
top my claims even more: you can tell a reliable init by the fact that
it does not use malloc, neither directly nor indirectly (hello,
opendir(3) !!! ;-)
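here is the small sketch of an abstract unix socket listener i mentioned
above (Linux only, untested; the name "init-ctl" and the backlog are
just made up for the example). the whole point is the leading NUL byte
in sun_path: the socket lives in the abstract namespace, so no
rendez-vous point on any (possibly read-only) filesystem is needed:

  #include <stddef.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <sys/un.h>

  static int init_ctl_listener(void)
  {
      struct sockaddr_un sa;
      socklen_t len = offsetof(struct sockaddr_un, sun_path) + 9;
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);

      if (fd < 0) return -1;
      memset(&sa, 0, sizeof sa);
      sa.sun_family = AF_UNIX;
      /* leading NUL byte: abstract namespace, nothing touches the fs */
      memcpy(sa.sun_path, "\0init-ctl", 9);

      if (bind(fd, (struct sockaddr *)&sa, len) < 0 || listen(fd, 8) < 0) {
          close(fd);
          return -1;
      }
      return fd;   /* poll/select on this fd in the normal event loop */
  }

a client connects with exactly the same sockaddr; and if connect() ever
fails, we still have the signal (and msg queue) fallback described above.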
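and here is the rough skeleton of a purely signal driven process #1 i
was talking about: no sockets, no queues, no malloc. which signal
triggers which action is of course just my own arbitrary choice here:

  #include <signal.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      sigset_t all;
      siginfo_t si;

      sigfillset(&all);
      sigprocmask(SIG_BLOCK, &all, 0);   /* take all signals synchronously */

      /* ... mount the essentials, start the supervisor and/or /etc/rc ... */

      for (;;) {
          if (sigwaitinfo(&all, &si) < 0)
              continue;
          switch (si.si_signo) {
          case SIGCHLD:                  /* reap orphans reparented to us */
              while (waitpid(-1, 0, WNOHANG) > 0) ;
              break;
          case SIGTERM:                  /* halt request (my choice) */
          case SIGINT:                   /* ctrl-alt-del: reboot (my choice) */
              /* ... emergency/shutdown handling, purely signal driven ... */
              break;
          default:
              break;
          }
      }
  }

everything the machine absolutely needs in an emergency (reaping,
halting, rebooting) fits into that loop; anything fancier can go over
the sockets or the msg queues described above.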
but more on the malloc topic in another post, since this one has become
quite long.