musl libc mailing list
* [musl] ENOFILE after reaching SEM_NSEMS_MAX
From: Patrick Rauscher @ 2024-01-08 14:01 UTC
  To: musl

Hello everyone,

In POSIX, the constant SEM_NSEMS_MAX has been defined as 256 for some
time. While glibc nowadays ignores the limit and yields -1 when asked
for it [1], musl currently fails with ENOFILE once 256 named semaphores
are open.
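
For reference, the runtime query that glibc answers with -1 can be as
simple as this (a minimal sketch, which I have not tested myself):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* glibc reports no fixed limit (-1); on musl the limit
             * that matters is the SEM_NSEMS_MAX of 256 from <limits.h>. */
            printf("%ld\n", sysconf(_SC_SEM_NSEMS_MAX));
            return 0;
    }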

This hit me (and evidently at least one other person) when using Python
multiprocessing [2]: Jan H noticed that allocating a large number of
semaphores (or objects that use at least one semaphore) works on Debian
but fails on Alpine.

Thanks to psykose on IRC, the issue was traced to the difference in
libc. To make this finding less ephemeral than an IRC log, I am leaving
a note here. As sem_open works in the documented way, this is certainly
not a bug report, but rather a feature request, if you will.

Due to my lack of C knowledge I can only stand by and marvel at the
further discussion, but maybe someone can come up with ideas 🙂

Thanks for your time,
Patrick

1: https://sourceware.org/legacy-ml/libc-help/2012-02/msg00003.html
2: https://stackoverflow.com/questions/77679957/python-multiprocessing-no-file-descriptors-available-error-inside-docker-alpine



* Re: [musl] ENOFILE after reaching SEM_NSEMS_MAX
From: Markus Wichmann @ 2024-01-08 17:16 UTC
  To: musl; +Cc: Patrick Rauscher

On Mon, Jan 08, 2024 at 03:01:46PM +0100, Patrick Rauscher wrote:
> Hello everyone,
>
> In POSIX, the constant SEM_NSEMS_MAX has been defined as 256 for some
> time. While glibc nowadays ignores the limit and yields -1 when asked
> for it [1], musl currently fails with ENOFILE once 256 named semaphores
> are open.
>
> This hit me (and evidently at least one other person) when using Python
> multiprocessing [2]: Jan H noticed that allocating a large number of
> semaphores (or objects that use at least one semaphore) works on Debian
> but fails on Alpine.
>
> Thanks to psykose on IRC, the issue was traced to the difference in
> libc. To make this finding less ephemeral than an IRC log, I am leaving
> a note here. As sem_open works in the documented way, this is certainly
> not a bug report, but rather a feature request, if you will.
>
> Due to my lack of C knowledge I can only stand by and marvel at the
> further discussion, but maybe someone can come up with ideas 🙂
>
> Thanks for your time,
> Patrick
>
> 1: https://sourceware.org/legacy-ml/libc-help/2012-02/msg00003.html
> 2: https://stackoverflow.com/questions/77679957/python-multiprocessing-no-file-descriptors-available-error-inside-docker-alpine
>
I should probably explain what the limit is used for at the moment.
POSIX requires that all successful calls to sem_open() with the same
name argument return the same pointer and increment a reference count.
In practice, this is only possible by keeping a list of all file-backed
semaphores. The code goes through the file-mapping path as normal, then
at the end checks whether the semaphore was already mapped, and if so
unmaps the new copy and returns the old one.

With the limit in place, the memory for the semaphore map can be
allocated once and never returned. Iterations over the map have a
well-defined end, and all such loops have a defined maximum run time.
If the limit were no longer imposed, the semaphore map would have to
become a data structure capable of growth, e.g. a linked list. But then
you would get all the negative effects of a list versus the array we
currently have: whatever growable data structure you choose, you
basically always get worse performance than with a simple array.
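
In rough, simplified C, the lookup works something like this (an
illustrative sketch of the approach, not the actual musl source;
locking and most error handling are omitted, and the real code would
also compare st_dev, not just st_ino):

    #include <errno.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    #define SEM_NSEMS_MAX 256

    static struct { ino_t ino; sem_t *sem; int refcnt; }
            semtab[SEM_NSEMS_MAX];

    static sem_t *map_or_reuse(int fd)
    {
            struct stat st;
            sem_t *map;
            int i, slot = -1;

            if (fstat(fd, &st) < 0) return SEM_FAILED;

            /* Go through the file mapping code normally... */
            map = mmap(0, sizeof(sem_t), PROT_READ|PROT_WRITE,
                       MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) return SEM_FAILED;

            /* ...then scan the fixed-size table: a loop with a
             * well-defined end. */
            for (i = 0; i < SEM_NSEMS_MAX; i++) {
                    if (semtab[i].sem && semtab[i].ino == st.st_ino) {
                            semtab[i].refcnt++;
                            munmap(map, sizeof(sem_t)); /* unmap the new copy */
                            return semtab[i].sem;       /* return the old one */
                    }
                    if (!semtab[i].sem && slot < 0) slot = i;
            }

            if (slot < 0) {
                    /* Table full: this is where the 257th sem_open()
                     * fails. musl reports EMFILE here, whose strerror
                     * text is "No file descriptors available". */
                    munmap(map, sizeof(sem_t));
                    errno = EMFILE;
                    return SEM_FAILED;
            }

            semtab[slot].ino = st.st_ino;
            semtab[slot].sem = map;
            semtab[slot].refcnt = 1;
            return map;
    }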

For the Python example, I would ask whether this pattern of creating
named semaphores with random names is really all that wise. What is the
point? You have to transmit the name to the destination process
somehow. This is not really what named semaphores are for. Perhaps
shared memory would suit you better? Or forgoing semaphores in favor of
another primitive altogether?

Ciao,
Markus


* Re: [musl] ENOFILE after reaching SEM_NSEMS_MAX
From: Rich Felker @ 2024-01-08 22:08 UTC
  To: Markus Wichmann; +Cc: musl, Patrick Rauscher

On Mon, Jan 08, 2024 at 06:16:49PM +0100, Markus Wichmann wrote:
> [...]
>
> For the Python example, I would ask whether this pattern of creating
> named semaphores with random names is really all that wise. What is the
> point? You have to transmit the name to the destination process
> somehow. This is not really what named semaphores are for. Perhaps
> shared memory would suit you better? Or forgoing semaphores in favor of
> another primitive altogether?

Indeed, while I don't yet fully understand what they're trying to do,
named semaphores do not sound like a suitable tool here, where there
seems to be one named sem per synchronization primitive or something.

Named semaphores are *incredibly* inefficient. On a system with a
normal page size of 4k, named sems have 25500% overhead vs. an anon
sem. On a system with large 64k pages, that jumps to 409500% overhead.
Moreover, Linux imposes limits on the number of mmaps a process may
have (the default limit is 64k, after consolidation of adjacent anon
maps with the same permissions), and each map also contributes to open
file limits, etc. Even just using the musl limit of 256 named sems
(which is all that POSIX guarantees you; requiring more is
nonportable), you're wasting 1M of memory on a 4k-page system (16M on
a 64k-page system) on storage for the semaphores, and probably that
much or more again for all of the inodes, vmas, and other kernel
bookkeeping. At thousands of named sems, it gets far worse.
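
For concreteness, the arithmetic behind those figures (assuming a
16-byte sem_t, with each named sem occupying one whole page of its
own):

    4k pages:  (4096 - 16) / 16  =  255x =  25500% overhead
    64k pages: (65536 - 16) / 16 = 4095x = 409500% overhead
    256 sems * 4k pages = 1M of mappings for 4k of semaphore data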

I'm not familiar enough with the problem to know what the right thing
is, but if these are all related processes, they should probably be
allocating their semaphores as anon ones inside one or more
shared-memory maps. This would have far lower overhead, and would not
run into the limit hit here.
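
A minimal sketch of that approach (illustrative only; NSEMS is an
arbitrary count, and error handling beyond the mmap check is omitted):

    #include <semaphore.h>
    #include <sys/mman.h>

    #define NSEMS 1024 /* arbitrary: well past SEM_NSEMS_MAX */

    int main(void)
    {
            /* One anonymous shared mapping holds every semaphore; it
             * is inherited across fork(), so all related processes
             * share it. */
            sem_t *sems = mmap(0, NSEMS * sizeof(sem_t),
                               PROT_READ|PROT_WRITE,
                               MAP_SHARED|MAP_ANONYMOUS, -1, 0);
            if (sems == MAP_FAILED) return 1;

            /* pshared=1: each semaphore works across processes. */
            for (int i = 0; i < NSEMS; i++)
                    sem_init(&sems[i], 1, 0);

            /* fork() here; parent and children can sem_wait() and
             * sem_post() on any of the NSEMS semaphores, all backed
             * by one mapping instead of one page-sized mapping per
             * semaphore. */
            return 0;
    }

Unrelated processes could get the same effect from a single shm_open()
object mapped by everyone, rather than one named sem per primitive.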

Rich

