mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: [musl] sem_post() can miss waiters
Date: Mon, 21 Nov 2022 19:09:59 -0500
Message-ID: <20221122000958.GQ29905@brightrain.aerifal.cx>
In-Reply-To: <d352f85c0aab78a5777b0857a12e1de3@ispras.ru>

On Mon, Nov 21, 2022 at 08:14:50AM +0300, Alexey Izbyshev wrote:
> Hi,
> 
> While reading musl's POSIX semaphore implementation I realized that
> its accounting of waiters is incorrect (after commit
> 88c4e720317845a8e01aee03f142ba82674cd23d) due to a combination of the
> following factors:
> 
> * sem_post relies on a potentially stale waiters count when deciding
> whether to wake a waiter (see the sketch below).
> * Presence of waiters is not reflected in the semaphore value when
> the semaphore is unlocked.
> 
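> For reference, the wake logic in the current sem_post looks roughly
> like this (paraphrased, not an exact copy of the source; only the
> shape matters here):
> 
> int val, waiters, priv = sem->__val[2];
> do {
>     val = sem->__val[0];
>     waiters = sem->__val[1];   /* read before the CAS; may be stale */
>     if (val == SEM_VALUE_MAX) {
>         errno = EOVERFLOW;
>         return -1;
>     }
> } while (a_cas(sem->__val, val, val+1+(val<0)) != val);
> /* at most one waiter is woken, based on the possibly stale count */
> if (val<0 || waiters) __wake(sem->__val, 1, priv);
> 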
> Here is an example scenario with 4 threads:
> 
> * initial state
>   - val = 1, waiters = 0
> * T1
>   - enters sem_post
>   - reads val = 1, waiters = 0, is preempted
>   - val = 1, waiters = 0
> * T2
>   - sem_wait
>   - another sem_wait (blocks)
>   - val = -1, waiters = 1
> * T3
>   - sem_wait (blocks)
>   - val = -1, waiters = 2
> * T4
>   - sem_post (wakes T2)
>   - val = 1, waiters = 2
> * T1
>   - continues in sem_post
>   - a_cas(1, 2) succeeds
>   - T3 is NOT woken because sem_post thinks there are no waiters
>   - val = 2, waiters = 1
> * T2
>   - wakes and completes sem_wait
>   - val = 1, waiters = 1
> 
> So in the end T3 remains blocked even though the semaphore is unlocked.
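> 
> For what it's worth, a stress test along these lines (hypothetical
> sketch; the window is narrow, so it is not a reliable reproducer)
> would be a couple of posting threads and a couple of waiting threads
> with balanced counts:
> 
> #include <semaphore.h>
> #include <pthread.h>
> #include <stdio.h>
> 
> #define ITERS 1000000
> 
> static sem_t sem;
> 
> static void *producer(void *arg)
> {
>     for (int i = 0; i < ITERS; i++)
>         sem_post(&sem);     /* each post matches one wait below */
>     return 0;
> }
> 
> static void *consumer(void *arg)
> {
>     for (int i = 0; i < ITERS; i++)
>         sem_wait(&sem);
>     return 0;
> }
> 
> int main(void)
> {
>     pthread_t t[4];
>     sem_init(&sem, 0, 0);
>     pthread_create(&t[0], 0, producer, 0);
>     pthread_create(&t[1], 0, producer, 0);
>     pthread_create(&t[2], 0, consumer, 0);
>     pthread_create(&t[3], 0, consumer, 0);
>     for (int i = 0; i < 4; i++)
>         pthread_join(&t[i], 0); /* with a missed wake, this can hang */
>     puts("done");
>     return 0;
> }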
> 
> Here are two ideas for how this can be fixed without reintroducing
> the problem of accessing a potentially destroyed semaphore after it
> is unlocked.
> 
> * Idea 1
>   - use a proper WAITERS_BIT in the semaphore state instead of just
> a single value and preserve it in sem_post by default
>   - in sem_post, when we notice that the waiters count is zero but
> WAITERS_BIT is set, reset WAITERS_BIT, but wake all potential
> waiters afterwards.
> 
> This ensures that any new waiters that could arrive after we read the
> waiters count are taken into account (and they will proceed to lock
> the semaphore and potentially set WAITERS_BIT again after they are
> woken).
> 
> The sem_post body would look like this (the full patch is attached):
> 
> int val, new, waiters, priv = sem->__val[2];
> do {
>     val = sem->__val[0];
>     waiters = sem->__val[1];
>     if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
>         errno = EOVERFLOW;
>         return -1;
>     }
>     new = val + 1;
>     if (!waiters)
>         new &= ~0x80000000;  /* no waiters observed: clear WAITERS_BIT */
> } while (a_cas(sem->__val, val, new) != val);
> /* if WAITERS_BIT was set, wake one waiter, or everyone if the count was 0 */
> if (val<0) __wake(sem->__val, waiters ? 1 : -1, priv);
> 
> The main downside of this approach is that a FUTEX_WAKE system call
> will be performed each time the semaphore transitions from "maybe
> have waiters" to "no waiters" state, and in practice it will be
> useless in most cases.
> 
> * Idea 2
>   - use a proper WAITERS_BIT as above
>   - also add a new DETECT_NEW_WAITERS_BIT
>   - in sem_post, when we notice that the waiters count is zero,
> WAITERS_BIT is set, and DETECT_NEW_WAITERS_BIT is not set, atomically
> transition into a "waiters detection" state (DETECT_NEW_WAITERS_BIT
> set, WAITERS_BIT unset) *without unlocking the semaphore*. Then
> recheck the waiters count, and if it's still zero, try to atomically
> transition to the "no waiters" state (both bits unset) while also
> unlocking the semaphore. If this succeeds, we know that no new
> waiters arrived before we unlocked the semaphore and hence no wake-up
> is needed. Regardless of whether we succeeded, we always leave the
> "waiters detection" state before exiting the CAS loop.
>   - in sem_post, wake a single waiter if at least one of WAITERS_BIT
> or DETECT_NEW_WAITERS_BIT is set. This ensures no waiters are missed
> by sem_post calls racing with the sem_post currently responsible for
> waiters detection.
> 
> The sem_post body would look like this (the full patch is attached):
> 
> int val, new, priv = sem->__val[2], sync = 0;
> for (;;) {
>     int cnt;
>     val = sem->__val[0];
>     cnt = val & SEM_VALUE_MAX;
>     if (cnt < SEM_VALUE_MAX) {
>         /* count == 0, WAITERS_BIT set, detection bit clear: enter
>          * the "waiters detection" state without unlocking */
>         if (!sync && val < 0 && !(val & 0x40000000) && !sem->__val[1]) {
>             new = cnt | 0x40000000;
>             if (a_cas(sem->__val, val, new) != val)
>                 continue;
>             val = new;
>             sync = 1;
>         }
>         new = val + 1;
>     } else {
>         new = val;
>     }
>     if (sync) {
>         /* leave the detection state; re-set WAITERS_BIT if new
>          * waiters arrived while we were checking */
>         if (sem->__val[1])
>             new |= 0x80000000;
>         new &= ~0x40000000;
>     }
>     if (a_cas(sem->__val, val, new) == val)
>         break;
> }
> if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
>     errno = EOVERFLOW;
>     return -1;
> }
> /* wake one waiter if WAITERS_BIT or DETECT_NEW_WAITERS_BIT is set */
> if (new < 0 || (new & 0x40000000)) __wake(sem->__val, 1, priv);
> 
> Compared to the first approach, there is just an extra a_cas instead
> of a system call, though we lose one bit in SEM_VALUE_MAX.
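> 
> For clarity, the state layout assumed by this sketch is:
> 
> /* __val[0] bits 0..29 : semaphore count (SEM_VALUE_MAX loses one bit) */
> /* __val[0] 0x40000000 : DETECT_NEW_WAITERS_BIT                        */
> /* __val[0] 0x80000000 : WAITERS_BIT                                   */
> /* __val[1]            : waiters count                                  */
> /* __val[2]            : futex priv flag                                */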
> 
> Hope this helps,

Thanks for the detailed analysis and work on fixes. I'm not sure how
this was overlooked at the time of design.

If I understand correctly, the cost of the first option is generally
an extra "last" broadcast wake that shouldn't be needed. Is that
right?

For example, if the sem starts with count 0, thread A calls wait, and
thread B then calls post twice, both posts make a syscall even though
only the first one needs to.
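
Spelled out (an informal trace; I'm assuming the idea-1 scheme where
only a later post with an observed waiters count of zero clears the
bit):

sem_t s;
sem_init(&s, 0, 0);

/* thread A */
sem_wait(&s);      /* blocks: waiters=1, WAITERS_BIT gets set */

/* thread B */
sem_post(&s);      /* futex wake, needed: A is sleeping */
                   /* ... A runs, acquires, waiters drops back to 0,
                    * but WAITERS_BIT can still be set ... */
sem_post(&s);      /* sees waiters==0 with the bit set: clears the bit
                    * and does a broadcast wake that wakes nobody */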

What if you instead perform the broadcast wake when the observed
waiters count is <=1 rather than ==0? This should have no cost in the
common case, but in the race case, I think it forces any hidden
(just-arrived) extra waiters to wake, fail, and re-publish their
waiting status to the waiters bit.
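
In terms of your first sketch, I mean something roughly like this
(untested, just to illustrate the idea):

    new = val + 1;
    if (waiters <= 1)
        new &= ~0x80000000;  /* clear the bit even when one waiter is seen */
} while (a_cas(sem->__val, val, new) != val);
/* broadcast unless more than one waiter was observed */
if (val<0) __wake(sem->__val, waiters > 1 ? 1 : -1, priv);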

Regarding your second solution, I think it's more elegant and
efficient and would be preferable if we were doing this from scratch,
but changing SEM_VALUE_MAX is arguably an ABI change we should not
make.

I think there's another really stupid solution too that I would even
consider, partly motivated by the fact that, with long-lived
process-shared semaphores, the waiters count can become stale if
waiters are killed. (Note: semaphores aren't required to be robust
against this, but it's ugly that they're not.) This solution is just
to keep the current code, but drop the waiters count entirely, and use
broadcast wakes for all wakes. Then, any still-live waiters are
*always* responsible for re-asserting their waiting status when all
but one fail to acquire the semaphore after a wake. Of course this is
a thundering herd, which is arguably something we should not want, but
we accept it for correctness in several other places like condvars...
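
Roughly (untested sketch, not a worked-out patch): keep the current
value semantics, ignore the waiters count entirely, and make every
wake a broadcast:

int sem_post(sem_t *sem)
{
    int val, priv = sem->__val[2];
    do {
        val = sem->__val[0];
        if (val == SEM_VALUE_MAX) {
            errno = EOVERFLOW;
            return -1;
        }
    } while (a_cas(sem->__val, val, val+1+(val<0)) != val);
    /* thundering herd: wake everyone; losers re-assert waiting status */
    if (val < 0) __wake(sem->__val, -1, priv);
    return 0;
}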

Rich

Thread overview: 7+ messages
2022-11-21  5:14 Alexey Izbyshev
2022-11-22  0:09 ` Rich Felker [this message]
2022-11-22  5:41   ` A. Wilcox
2022-11-22  5:41   ` Alexey Izbyshev
2022-12-14  1:48     ` Rich Felker
2022-12-14 10:29       ` Alexey Izbyshev
2023-02-22 18:12         ` Rich Felker
