From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: [musl] Getting rid of vmlock
Date: Wed, 11 Nov 2020 15:41:20 -0500
Message-ID: <20201111204119.GN534@brightrain.aerifal.cx>

The vmlock mechanism is a horrible hack to work around Linux's robust
mutex system being broken; it inherently has a race of the following
form:

	thread A		thread B
1.	put M in pending
2.	unlock M
3.				lock M
4.				unlock M
5.				destroy M
6.				unmap M's memory
7.				mmap something new at same address
8.	remove M from pending

whereby, if the process is asynchronously terminated (e.g. by SIGKILL)
right before line 8 (removing M from pending), the kernel will clobber
unrelated memory. Normally this is harmless since the process is
dying, but it's possible this unrelated memory is a new MAP_SHARED
PROT_WRITE file mapping, and now you have file corruption.
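
For reference, the unlock-side protocol that opens this window looks
roughly like the following. This is a sketch against the Linux
robust-futex ABI, not musl's actual source; robust_unlock and the
elided unlink/wake steps are illustrative:

	struct robust_list { struct robust_list *next; };

	/* per-thread header registered with the kernel via
	 * set_robust_list(2) */
	struct robust_list_head {
		struct robust_list *list;
		long futex_offset;
		struct robust_list *list_op_pending;
	};

	struct robust_mutex {
		int futex;               /* owner tid, or 0 if unlocked */
		struct robust_list node; /* linked into the owner's list */
	};

	extern _Thread_local struct robust_list_head robust_head;

	void robust_unlock(struct robust_mutex *m)
	{
		robust_head.list_op_pending = &m->node; /* 1: put M in pending */
		/* unlink m->node from robust_head.list */
		__atomic_store_n(&m->futex, 0, __ATOMIC_RELEASE); /* 2: unlock M */
		/* futex wake */
		robust_head.list_op_pending = 0; /* 8: remove M from pending */
	}

If the process dies anywhere between lines 2 and 8, the kernel's exit
path still follows list_op_pending and writes an owner-died value to
the futex word at that address, regardless of what is mapped there by
then.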

musl's vmlock mechanism avoids this by taking a sort of lock (just a
usage counter) while a robust list pending slot is in use, and
disallowing munmap or mmap/mremap with MAP_FIXED until the counter
reaches zero. This has the unfortunate consequence of making the mmap
family non-AS-safe. Thus I'd really like to get rid of it.
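
For reference, the mechanism amounts to a usage count plus a futex
wait, roughly like this (written in the style of musl's internal a_*
atomics and __wait/__wake futex wrappers; a sketch, not a verbatim
copy of vmlock.c):

	/* assumed helpers, in the style of musl's internals */
	extern void a_inc(volatile int *);
	extern int a_fetch_add(volatile int *, int);
	extern void __wait(volatile int *, volatile int *, int, int);
	extern void __wake(volatile int *, int, int);

	static volatile int vmlock[2]; /* [0]=count, [1]=waiter flag */

	void __vm_lock(void) /* held while a pending slot is in use */
	{
		a_inc(vmlock);
	}

	void __vm_unlock(void)
	{
		if (a_fetch_add(vmlock, -1)==1 && vmlock[1])
			__wake(vmlock, -1, 1);
	}

	void __vm_wait(void) /* munmap and MAP_FIXED callers block here */
	{
		int tmp;
		while ((tmp=vmlock[0]))
			__wait(vmlock, vmlock+1, tmp, 1);
	}

The AS-safety loss is in __vm_wait: munmap called from a signal
handler can block indefinitely on a count held by the very code it
interrupted.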

Right now we're preventing the problem by forcing the munmap operation
(line 6) to wait for the delisting from pending (line 8). But we could
instead just delay the lock (line 3) of a process-shared robust mutex
until all pending operations in the current process finish. Since the
window (between lines 2 and 8) only exists until pthread_mutex_unlock
returns in thread A, there is no way for thread B to observe M being
ready to destroy/unmap/whatever without a lock or trylock operation on
M. So this seems to work.
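
Concretely, that would put a wait on a process-local pending count in
front of the acquire path for process-shared robust mutexes. A minimal
sketch, reusing the robust_mutex type from above, with pending_count,
futex_wait, and lock_acquire as hypothetical names:

	extern volatile int pending_count; /* live pending slots in
	                                      this process */
	extern void futex_wait(volatile int *addr, int val);
	extern int lock_acquire(struct robust_mutex *m);

	int robust_lock(struct robust_mutex *m) /* 3: lock M */
	{
		int tmp;
		/* drain this process's pending window before acquiring */
		while ((tmp = pending_count))
			futex_wait(&pending_count, tmp);
		return lock_acquire(m); /* existing acquire path */
	}

The blocking thereby moves from munmap, which is expected to be
AS-safe, to pthread_mutex_lock, which is allowed to block anyway.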

Both before and after such a change, however, we have a nasty problem.
If thread A is interrupted by a signal handler, or has low priority
(think PI, where the unlock drops priority) between lines 2 and 8, it's
possible for the pending slot removal to be delayed arbitrarily. This
is really bad.

I think the problem is solvable by not just keeping a count of threads
with pending robust list slots, but making them register in a list (or
simply using the existing global thread list). Then when thread B
takes the lock (line 3) and sees a nonzero pending count, rather than
waiting for it to reach zero, it can walk the list and *perform the
removal itself*, clearing M from any pending slot it finds M in.
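
As a sketch (the thread-list API and field names are hypothetical; the
CAS guarantees the only transition ever performed is "slot points to
M" -> "slot empty", so an unrelated slot can't be clobbered):

	struct thread {
		struct thread *next;
		struct robust_list_head robust_head; /* as sketched above */
	};
	extern void lock_thread_list(void);
	extern void unlock_thread_list(void);
	extern struct thread *thread_list_head(void);

	void clear_pending_refs(struct robust_mutex *m) /* from line 3 */
	{
		struct thread *t;
		lock_thread_list();
		for (t = thread_list_head(); t; t = t->next) {
			struct robust_list *expect = &m->node;
			/* clear only slots that still name M; a blind
			 * store could wipe a slot that has since been
			 * reused for another mutex */
			__atomic_compare_exchange_n(
				&t->robust_head.list_op_pending,
				&expect, (struct robust_list *)0,
				0, __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
		}
		unlock_thread_list();
	}

The owner's own clear at line 8 stores the same zero, so it's harmless
whichever side gets there first.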

The vmlock is also used (rather gratuitously, in the same way) for
process-shared barriers. These can be fixed to perform __vm_wait
before returning, rather than imposing that it happen later at unmap
time, and then the vmlock can be replaced with a "threads exiting
pshared barrier" lock that's functionally equivalent to the old
vmlock.
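
One possible shape for that, as a rough sketch: keep the counter
structure of the old vmlock, but scope it to barrier exit and drain it
inside the wait itself (all names illustrative; helpers as declared in
the vmlock sketch above):

	static volatile int exiting[2]; /* [0]=threads still in a pshared
	                                   barrier's exit path,
	                                   [1]=waiter flag */

	static void exiting_lock(void) { a_inc(exiting); }

	static void exiting_unlock(void)
	{
		if (a_fetch_add(exiting, -1)==1 && exiting[1])
			__wake(exiting, -1, 1);
	}

	static void exiting_wait(void)
	{
		int tmp;
		while ((tmp = exiting[0]))
			__wait(exiting, exiting+1, tmp, 1);
	}

exiting_lock/exiting_unlock would bracket the exit-path bookkeeping
where __vm_lock/__vm_unlock sit now, and exiting_wait would run just
before pthread_barrier_wait returns, so that once a waiter returns, no
thread in the process is still touching the barrier's memory and
munmap no longer has to participate.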


Rich
