Date: Wed, 11 Nov 2020 15:41:20 -0500
From: Rich Felker
To: musl@lists.openwall.com
Message-ID: <20201111204119.GN534@brightrain.aerifal.cx>
Subject: [musl] Getting rid of vmlock

The vmlock mechanism is a horrible hack to work around Linux's robust
mutex system being broken; it inherently has a race of the following
form:

	thread A			thread B
	1. put M in pending
	2. unlock M
					3. lock M
					4. unlock M
					5. destroy M
					6. unmap M's memory
					7. mmap something new at
					   same address
	8. remove M from pending

whereby, if the process is async terminated (e.g. by SIGKILL) right
before line 8 (removing M from pending), the kernel will clobber
unrelated memory. Normally this is harmless since the process is
dying, but it's possible that this unrelated memory is a new
MAP_SHARED PROT_WRITE file mapping, and now you have file corruption.

musl's vmlock mechanism avoids this by taking a sort of lock (just a
usage counter) while a robust list pending slot is in use, and
disallowing munmap or mmap/mremap with MAP_FIXED until the counter
reaches zero.
This has the unfortunate consequence of making the mmap family
non-AS-safe. Thus I'd really like to get rid of it.

Right now we're preventing the problem by forcing the munmap
operation (line 6) to wait for the delisting from pending (line 8).
But we could instead just delay the lock (line 3) of a process-shared
robust mutex until all pending operations in the current process
finish. Since the window (between lines 2 and 8) only exists until
pthread_mutex_unlock returns in thread A, there is no way for thread
B to observe M being ready to destroy/unmap/whatever without a lock
or trylock operation on M. So this seems to work.

Both before and after such a change, however, we have a nasty
problem. If thread A is interrupted by a signal handler, or has low
priority (think PI where the unlock drops priority), between lines 2
and 8, it's possible for the pending slot removal to be delayed
arbitrarily. This is really bad.

I think the problem is solvable by not just keeping a count of
threads with pending robust list slots, but making them register in a
list (or simply using the existing global thread list). Then when
thread B takes the lock (line 3), rather than waiting on the pending
count to reach zero, if it sees a nonzero count, it can walk the list
and *perform the removal itself* of M from any pending slot it finds
M in.

The vmlock is also used (rather gratuitously, in the same way) for
process-shared barriers. These can be fixed to perform __vm_wait
before returning, rather than imposing that it happen later at unmap
time, and then the vmlock can be replaced with a "threads exiting
pshared barrier" lock that's functionally equivalent to the old
vmlock.

Rich