mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: Use-after-free in __unlock
Date: Thu, 1 Jun 2017 11:57:53 -0400	[thread overview]
Message-ID: <20170601155753.GK1627@brightrain.aerifal.cx> (raw)
In-Reply-To: <CAFnh-mfmsFLbjqJ4ORjq6Ov+dpk2uwJSZzyKiL8yFmmG4biVBw@mail.gmail.com>

On Thu, Jun 01, 2017 at 10:32:37AM -0500, Alex Crichton wrote:
> Hello! I personally work on the rust-lang/rust compiler [1] and one of the
> platforms we run CI for is x86_64 Linux with musl as a libc. We've got a
> longstanding issue [2] of spurious segfaults in musl binaries on our CI, and
> one of our contributors managed to get a stack trace and I think we've tracked
> down the bug!
> 
> I believe there's a use-after-free in the `__unlock` function when used with
> threading in musl (still present on master as far as I can tell). The source of
> the unlock function looks like:
> 
>     void __unlock(volatile int *l)
>     {
>         if (l[0]) {
>             a_store(l, 0);
>             if (l[1]) __wake(l, 1, 1);
>         }
>     }
> 
> The problem I believe I'm seeing is that after `a_store` finishes, the memory
> behind the lock, `l`, is deallocated. This means that the later access of
> `l[1]` is a use-after-free, which I believe explains the spurious segfaults
> we're seeing on our CI. The reproduction we've got is this sequence of events:
> 
> * Thread A starts thread B
> * Thread A calls `pthread_detach` on the return value of `pthread_create` for
>   thread B.
> * The implementation of `pthread_detach` does its business and eventually calls
>   `__unlock(t->exitlock)`.
> * Meanwhile, thread B exits.
> * Thread B sees that `t->exitlock` is unlocked and deallocates the `pthread_t`
>   memory.
> * Thread A comes back to access `l[1]` (which I think is `t->exitlock[1]`)
>   and segfaults, as this memory has been freed.
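
A minimal stress test along these lines (a sketch assuming nothing beyond
POSIX threads; this is not code from the original report) should eventually
land in the window between `a_store(l, 0)` and the read of `l[1]`:

    /* build: cc repro.c -o repro -lpthread */
    #include <pthread.h>

    static void *child(void *arg)
    {
        (void)arg;
        return 0;   /* exit immediately, so thread teardown races the
                       pthread_detach below */
    }

    int main(void)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_t t;
            if (pthread_create(&t, 0, child, 0)) break;
            /* per the report, pthread_detach ends in
             * __unlock(t->exitlock); if the child exits and frees its
             * descriptor right after __unlock's a_store, the parent's
             * read of l[1] hits freed memory */
            pthread_detach(t);
        }
        return 0;
    }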

Thanks for finding and reporting this. Indeed, __lock and __unlock are
not made safe for use on dynamic-lifetime objects, and I was wrongly
thinking they were only used for static-lifetime ones (various
libc-internal locks).

For infrequently-used locks, which these seem to be, I see no reason
to optimize for contention by having a separate waiter count; a single
atomic int whose states are "unlocked", "locked with no waiters", and
"locked maybe with waiters" is a simple solution and slightly shrinks
the relevant structures anyway. It's possible that this is the right
solution for all places where __lock and __unlock are used, but we
should probably do a review and see which ones might reasonably be
subject to high contention (where a spurious FUTEX_WAKE is very
costly).
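
As an illustration of that scheme (not musl's eventual patch; the helper
names a_cas, a_swap, __wait and __wake are modeled on musl's internal
atomic/futex primitives), the classic three-state futex lock looks like:

    /* 0 = unlocked, 1 = locked, no waiters, 2 = locked, maybe waiters */

    void lock3(volatile int *l)
    {
        int c = a_cas(l, 0, 1);        /* uncontended fast path: 0 -> 1 */
        if (c) {
            /* advertise possible waiters, then sleep while locked */
            if (c != 2) c = a_swap(l, 2);
            while (c) {
                __wait(l, 0, 2, 1);    /* futex wait while *l == 2 */
                c = a_swap(l, 2);      /* retake in "maybe waiters" state */
            }
        }
    }

    void unlock3(volatile int *l)
    {
        /* the swap both releases the lock and reports contention; no
         * userspace access to *l happens afterward, so the object may
         * be freed the instant another thread sees it unlocked. The
         * FUTEX_WAKE passes a possibly-stale address to the kernel,
         * which is harmless. */
        if (a_swap(l, 0) == 2)
            __wake(l, 1, 1);
    }

The cost is that an unlock after any contention issues a wake even if the
last waiter has already left, which is the spurious-FUTEX_WAKE tradeoff
mentioned above.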

Rich


Thread overview: 6+ messages
2017-06-01 15:32 Alex Crichton
2017-06-01 15:42 ` Alexander Monakov
2017-06-01 15:57 ` Rich Felker [this message]
2017-06-01 16:16   ` Rich Felker
2017-06-02  5:48   ` Joakim Sindholt
2017-06-02 10:11     ` Jens Gustedt

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
