Date: Mon, 8 Jan 2024 17:08:35 -0500
From: Rich Felker
To: Markus Wichmann
Cc: musl@lists.openwall.com, Patrick Rauscher
Subject: Re: [musl] ENOFILE after reaching SEM_NSEMS_MAX

On Mon, Jan 08, 2024 at 06:16:49PM +0100, Markus Wichmann wrote:
> On Mon, Jan 08, 2024 at 03:01:46PM +0100, Patrick Rauscher wrote:
> > Hello everyone,
> >
> > in POSIX, the constant SEM_NSEMS_MAX has been set at 256 for some
> > time. While glibc nowadays ignores the limit and will yield -1 when
> > asked for it [1], musl currently fails with EMFILE ("No file
> > descriptors available") when asking for the 256th semaphore.
> >
> > This hit me (and evidently at least one other person) when using
> > Python multiprocessing [2]: Jan H found that allocating a large
> > number of semaphores (or objects using at least one semaphore)
> > works on Debian but fails on Alpine.
> >
> > Thanks to psykose on IRC, the issue was traced to the difference in
> > libc. To make this finding less ephemeral than an IRC log, I am
> > leaving a note here. As sem_open works in the documented way, this
> > is certainly not a bug report; consider it a feature request if you
> > will.
> >
> > Due to my lack of C knowledge I can only stand by and marvel at the
> > further discussion, but maybe someone can come up with ideas 🙂
> >
> > Thanks for your time,
> > Patrick
> >
> > 1: https://sourceware.org/legacy-ml/libc-help/2012-02/msg00003.html
> > 2: https://stackoverflow.com/questions/77679957/python-multiprocessing-no-file-descriptors-available-error-inside-docker-alpine
>
> I should probably explain what the limit is used for at the moment.
> POSIX requires that all successful calls to sem_open() with the same
> name argument return the same pointer and increment a refcount.
> Practically, this is only possible by keeping a list of all
> file-backed semaphores. The code goes through the file mapping code
> normally, then at the end checks whether the semaphore was already
> mapped, and if so unmaps the new copy and returns the old one.
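
As an aside, this identity requirement is easy to observe from
userspace. A minimal sketch (untested; the name "/demo" is made up
for illustration, and on glibc you'd link with -pthread):

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two opens of the same name within one process: POSIX
           requires both successful calls to return the same
           address. */
        sem_t *a = sem_open("/demo", O_CREAT, 0600, 1);
        sem_t *b = sem_open("/demo", 0);
        if (a == SEM_FAILED || b == SEM_FAILED) return 1;
        printf("same pointer: %s\n", a == b ? "yes" : "no");
        /* Each successful sem_open increments the refcount, so
           each one needs a matching sem_close. */
        sem_close(b);
        sem_close(a);
        sem_unlink("/demo");
        return 0;
    }

It's exactly this same-pointer guarantee that forces the
implementation to keep the map of open named semaphores described
above.
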
> With the limit in place, the memory for the semaphore map can be
> allocated once and never freed. Iterations over the map have a
> well-defined end, and there is a defined maximum run time for all
> such loops. If the limit were no longer imposed, the semaphore map
> would have to become a data structure capable of growth, e.g. a
> list. But then you'd get all the negative effects of having a list
> vs. the array we currently have. Whatever data structure you choose,
> you basically always get worse performance than with a simple array.
>
> For the Python example, I would ask whether this pattern of creating
> named semaphores with random names is really all that wise. What is
> the point? You have to transmit the name to the destination process
> somehow. This is not really what named semaphores are for. Mayhaps a
> shared memory mapping would suit you better? Or foregoing semaphores
> in favor of another primitive altogether?

Indeed, while I don't yet fully understand what they're trying to do,
named semaphores do not sound like a suitable tool here, where there
seems to be one named sem per synchronization primitive or something.

Named semaphores are *incredibly* inefficient. Each one occupies a
whole page for what is otherwise a 16-byte object: on a system with a
normal 4k page size, that is (4096-16)/16 = 25500% overhead vs an
anon sem. On a system with large 64k pages, that jumps to 409500%
overhead. Moreover, Linux imposes a limit on the number of mmaps a
process can have (the default limit is 64k, after consolidation of
adjacent anon maps with the same permissions), and each map also
counts against open file limits, etc.

Even just using the musl limit of 256 named sems (which is all that
POSIX guarantees you; requiring more is nonportable), you're wasting
1M of memory on a 4k-page system (16M on a 64k-page system) on
storage for the semaphores, and probably that much or more again for
all of the inodes, vmas, and other kernel bookkeeping. At thousands
of named sems, it gets far worse.

I'm not familiar enough with the problem to know what the right thing
is, but if these are all related processes, they should probably be
allocating their semaphores as anon ones inside one or more
shared-memory maps. This would have far lower overhead, and would not
run into the limit hit here.

Rich
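
P.S. Untested, but roughly what I have in mind; the semaphore count
and the index used below are made up for illustration. Related
processes share one anonymous mapping holding many process-shared
sems:

    #define _DEFAULT_SOURCE
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NSEMS 1024

    int main(void)
    {
        /* One shared mapping (a handful of pages, a single vma)
           holds all the semaphores, instead of one page, inode, and
           vma per named semaphore. */
        sem_t *sems = mmap(0, NSEMS * sizeof *sems,
                           PROT_READ|PROT_WRITE,
                           MAP_SHARED|MAP_ANONYMOUS, -1, 0);
        if (sems == MAP_FAILED) return 1;

        for (int i = 0; i < NSEMS; i++)
            sem_init(&sems[i], 1, 0); /* pshared=1: works across fork */

        pid_t pid = fork();
        if (pid == 0) {
            sem_post(&sems[42]);      /* child signals via sem #42 */
            _exit(0);
        }
        sem_wait(&sems[42]);          /* parent blocks until the post */
        waitpid(pid, 0, 0);
        puts("got post from child");
        return 0;
    }

For unrelated processes, the same idea works with a single file from
shm_open() mapped by everyone: still one map total rather than one
per semaphore, and only one name to pass around.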