Date: Tue, 13 Dec 2022 20:48:59 -0500
From: Rich Felker
To: musl@lists.openwall.com
Subject: Re: [musl] sem_post() can miss waiters
Message-ID: <20221214014857.GA15716@brightrain.aerifal.cx>
In-Reply-To: <87064ce2eef96ffbced93da7812d5358@ispras.ru>

On Tue, Nov 22, 2022 at 08:41:44AM +0300, Alexey Izbyshev wrote:
> On 2022-11-22 03:09, Rich Felker wrote:
> >On Mon, Nov 21, 2022 at 08:14:50AM +0300, Alexey Izbyshev wrote:
> >>Hi,
> >>
> >>While reading musl's POSIX semaphore implementation I realized that
> >>its accounting of waiters is incorrect (after commit
> >>88c4e720317845a8e01aee03f142ba82674cd23d) due to a combination of
> >>the following factors:
> >>
> >>* sem_post relies on a potentially stale waiters count when
> >>deciding whether to wake a waiter.
> >>* The presence of waiters is not reflected in the semaphore value
> >>when the semaphore is unlocked.
> >>
> >>Here is an example scenario with 4 threads:
> >>
> >>* initial state
> >>  - val = 1, waiters = 0
> >>* T1
> >>  - enters sem_post
> >>  - reads val = 1, waiters = 0, is preempted
> >>  - val = 1, waiters = 0
> >>* T2
> >>  - sem_wait
> >>  - another sem_wait (blocks)
> >>  - val = -1, waiters = 1
> >>* T3
> >>  - sem_wait (blocks)
> >>  - val = -1, waiters = 2
> >>* T4
> >>  - sem_post (wakes T2)
> >>  - val = 1, waiters = 2
> >>* T1
> >>  - continues in sem_post
> >>  - a_cas(1, 2) succeeds
> >>  - T3 is NOT woken because sem_post thinks there are no waiters
> >>  - val = 2, waiters = 1
> >>* T2
> >>  - wakes and completes sem_wait
> >>  - val = 1, waiters = 1
> >>
> >>So in the end T3 remains blocked even though the semaphore is
> >>unlocked.
> >>
> >>Here are two ideas for how this can be fixed without reintroducing
> >>the problem of accessing a potentially destroyed semaphore after it
> >>is unlocked.
> >>
> >>* Idea 1
> >>  - use a proper WAITERS_BIT in the semaphore state instead of just
> >>a single value, and preserve it in sem_post by default
> >>  - in sem_post, when we notice that the waiters count is zero but
> >>WAITERS_BIT is set, reset WAITERS_BIT, but wake all potential
> >>waiters afterwards.
> >>
> >>This ensures that any new waiters that could arrive after we read
> >>the waiters count are taken into account (and they will proceed to
> >>lock the semaphore and potentially set WAITERS_BIT again after they
> >>are woken).
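For reference while following the trace above, this is the pre-patch
sem_post it defeats (reconstructed from the removed lines of the patch
attached at the end of this message; the comments tracking T1 are
annotations, not musl source):

#include <semaphore.h>
#include "pthread_impl.h"

int sem_post(sem_t *sem)
{
	int val, waiters, priv = sem->__val[2];
	do {
		val = sem->__val[0];
		/* T1 reads waiters == 0 here, then is preempted */
		waiters = sem->__val[1];
		if (val == SEM_VALUE_MAX) {
			errno = EOVERFLOW;
			return -1;
		}
	/* the CAS validates only val, not waiters; val has cycled
	 * 1 -> 0 -> -1 -> 1 while T1 slept, so the CAS succeeds even
	 * though waiters is long stale (ABA) */
	} while (a_cas(sem->__val, val, val+1+(val<0)) != val);
	/* T1 sees val == 1 and stale waiters == 0: T3's wake is skipped */
	if (val<0 || waiters) __wake(sem->__val, 1, priv);
	return 0;
}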
> >>
> >>The sem_post body would look like this (the full patch is
> >>attached):
> >>
> >>int val, new, waiters, priv = sem->__val[2];
> >>do {
> >>	val = sem->__val[0];
> >>	waiters = sem->__val[1];
> >>	if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
> >>		errno = EOVERFLOW;
> >>		return -1;
> >>	}
> >>	new = val + 1;
> >>	if (!waiters)
> >>		new &= ~0x80000000;
> >>} while (a_cas(sem->__val, val, new) != val);
> >>if (val<0) __wake(sem->__val, waiters ? 1 : -1, priv);
> >>
> >>The main downside of this approach is that a FUTEX_WAKE system call
> >>will be performed each time the semaphore transitions from "maybe
> >>have waiters" to "no waiters" state, and in practice it will be
> >>useless in most cases.
> >>
> >>* Idea 2
> >>  - use a proper WAITERS_BIT as above
> >>  - also add a new DETECT_NEW_WAITERS_BIT
> >>  - in sem_post, when we notice that the waiters count is zero,
> >>WAITERS_BIT is set and DETECT_NEW_WAITERS_BIT is not set,
> >>atomically transition into a "waiters detection" state
> >>(DETECT_NEW_WAITERS_BIT is set, WAITERS_BIT is unset) *without
> >>unlocking the semaphore*. Then recheck the waiters count again, and
> >>if it's still zero, try to atomically transition to the "no
> >>waiters" state (both bits unset) while also unlocking the
> >>semaphore. If this succeeds, we know that no new waiters arrived
> >>before we unlocked the semaphore and hence no wake-up is needed.
> >>Regardless of whether we succeeded, we always leave the "waiters
> >>detection" state before exiting the CAS loop.
> >>  - in sem_post, wake a single waiter if at least one of
> >>WAITERS_BIT or DETECT_NEW_WAITERS_BIT is set. This ensures no
> >>waiters are missed in sem_post calls racing with the sem_post
> >>currently responsible for waiters detection.
> >>
> >>The sem_post body would look like this (the full patch is
> >>attached):
> >>
> >>int val, new, priv = sem->__val[2], sync = 0;
> >>for (;;) {
> >>	int cnt;
> >>	val = sem->__val[0];
> >>	cnt = val & SEM_VALUE_MAX;
> >>	if (cnt < SEM_VALUE_MAX) {
> >>		if (!sync && val < 0 && !(val & 0x40000000) &&
> >>		    !sem->__val[1]) {
> >>			new = cnt | 0x40000000;
> >>			if (a_cas(sem->__val, val, new) != val)
> >>				continue;
> >>			val = new;
> >>			sync = 1;
> >>		}
> >>		new = val + 1;
> >>	} else {
> >>		new = val;
> >>	}
> >>	if (sync) {
> >>		if (sem->__val[1])
> >>			new |= 0x80000000;
> >>		new &= ~0x40000000;
> >>	}
> >>	if (a_cas(sem->__val, val, new) == val)
> >>		break;
> >>}
> >>if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
> >>	errno = EOVERFLOW;
> >>	return -1;
> >>}
> >>if (new < 0 || (new & 0x40000000)) __wake(sem->__val, 1, priv);
> >>
> >>Compared to the first approach, there is just an extra a_cas
> >>instead of a system call, though we lose one bit in SEM_VALUE_MAX.
> >>
> >>Hope this helps,
> >
> >Thanks for the detailed analysis and work on fixes. I'm not sure how
> >this was overlooked at the time of design.
> >
> >If I understand correctly, the cost of the first option is generally
> >an extra "last" broadcast wake that shouldn't be needed. Is that
> >right?
> >
> >For example, if sem starts with count 0 and thread A calls wait,
> >then thread B calls post twice, both posts make a syscall even
> >though only the first one should have.
> 
> Yes, exactly.
> 
> >What if you instead perform the broadcast wake when the observed
> >waiters count is <=1 rather than ==0? This should have no cost in
> >the common case, but in the race case, I think it forces any hiding
> >(just-arrived) extra waiters to wake, fail, and re-publish their
> >waiting status to the waiters bit.
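A note on the primitive both proposals rely on: __wake(addr, cnt, priv)
is musl's internal futex-wake wrapper, and a negative cnt requests a
broadcast. Its definition in src/internal/pthread_impl.h is roughly the
following (a from-memory sketch, not a verbatim quote):

/* wake up to cnt waiters on addr; cnt < 0 means wake everyone */
static inline void __wake(volatile void *addr, int cnt, int priv)
{
	if (priv) priv = FUTEX_PRIVATE;
	if (cnt<0) cnt = INT_MAX;
	/* fall back to the non-private op on kernels lacking it */
	__syscall(SYS_futex, addr, FUTEX_WAKE|priv, cnt) != -ENOSYS ||
	__syscall(SYS_futex, addr, FUTEX_WAKE, cnt);
}

So "waiters <= 1 ? -1 : 1" in the code below reads as: broadcast when
the observed count may be hiding just-arrived waiters, wake exactly
one otherwise.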
> 
> Indeed, I think this solves the overhead issue quite nicely, thanks.
> So the sem_post wake-up logic would basically boil down to this:
> 
> * WAITERS_BIT is set and waiters > 1: don't reset WAITERS_BIT since
> it's likely that some waiters will remain (and it's always fine to
> err on the side of preserving the WAITERS_BIT); wake a single
> waiter.
> 
> * WAITERS_BIT is set and waiters <= 1: reset WAITERS_BIT and wake
> all waiters. In non-racy cases only a single waiter will be woken.
> 
> * WAITERS_BIT is unset: don't wake anybody. Even if there are some
> waiters, another sem_post (the one that reset the WAITERS_BIT) is
> responsible for waking them.
> 
> In code:
> 
> int val, new, waiters, priv = sem->__val[2];
> do {
> 	val = sem->__val[0];
> 	waiters = sem->__val[1];
> 	if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
> 		errno = EOVERFLOW;
> 		return -1;
> 	}
> 	new = val + 1;
> 	if (waiters <= 1)
> 		new &= ~0x80000000;
> } while (a_cas(sem->__val, val, new) != val);
> if (val<0) __wake(sem->__val, waiters <= 1 ? -1 : 1, priv);

Yes, this looks good to me. How does the attached patch look?

> >Regarding your second solution, I think it's more elegant and
> >efficient and would be preferable if we were doing this from
> >scratch, but changing SEM_VALUE_MAX is arguably an ABI change we
> >should not make.
> 
> With your suggested improvement, the first solution is probably even
> more efficient in common cases, assuming there are no hidden costs
> in the kernel for waking a single waiter with a "wake all" operation
> instead of "wake single".

Agreed.

> >I think there's another really stupid solution too that I would
> >even consider, partly motivated by the fact that, with long-lived
> >process-shared semaphores, the waiters count can become stale if
> >waiters are killed. (Note: semaphores aren't required to be robust
> >against this, but it's ugly that they're not.) This solution is
> >just to keep the current code, but drop the waiters count entirely,
> >and use broadcast wakes for all wakes. Then, any still-live waiters
> >are *always* responsible for re-asserting their waiting status when
> >all but one fail to acquire the semaphore after a wake. Of course
> >this is a thundering herd, which is arguably something we should
> >not want, but we accept it for correctness in several other places
> >like condvars...
> 
> I'm not sure that the kind of "partial robustness" that could be
> achieved by always waking all waiters is worth punishing normal
> cases. Also, I don't think the waiters count being stale is an issue
> per se, because it can be wrong only in one direction (greater than
> the real count of waiters), assuming you don't mean overflow (but
> overflow could be handled by simply pinning the count at INT_MAX
> permanently). But many other issues are possible if we allow killing
> processes at arbitrary points, including sem_post not sending a wake
> notification after updating the semaphore value, or sem_wait
> consuming a notification but not updating the value. Granted, some
> of these issues may "self-heal" on a future sem_wait/sem_post (in
> the sense that the semaphore will leave a forbidden state), but they
> might still interfere with the program's shutdown logic (e.g. if the
> program expects to claim the semaphore N times at the end without
> being aware that some consumers are dead), and, of course, with any
> logic outside of semaphore manipulation.
> So it seems to me that either the program already watches for the
> death of processes it cares about, or it's not even clear that
> allowing further progress (due to a broadcast) is always more
> desirable than blocking.

Indeed, this seems like a bad direction to go, for the reasons you
described.

One thing I'd like to note here, though, without acting on it at this
time: when we were designing the semaphore long ago, there was a vague
proposal to make the post operation responsible for removing waiter
counts, rather than the waiter. It was thrown out because it didn't
seem to work. But if we ever did have a reason to want to do this, I
think it's possible now, since it's essentially just a "countdown to
broadcast wake" with no constraint that it be an accurate count, and
the new waiters bit is what really controls whether wakes happen.

Rich

[attachment: 0001-semaphores-fix-missed-wakes-from-ABA-bug-in-waiter-c.patch]

From 159d1f6c02569091c7a48bdb2e2e824b844a1902 Mon Sep 17 00:00:00 2001
From: Rich Felker
Date: Tue, 13 Dec 2022 18:39:44 -0500
Subject: [PATCH] semaphores: fix missed wakes from ABA bug in waiter count
 logic

because the has-waiters state in the semaphore value futex word is
only representable when the value is zero (the special value -1
represents "0 with potential new waiters"), it's lost if intervening
operations make the semaphore value positive again. this creates an
ABA issue in sem_post, whereby the post uses a stale waiters count
rather than re-evaluating it, skipping the futex wake if the stale
count was zero.

the fix here is based on a proposal by Alexey Izbyshev, with minor
changes to eliminate costly new spurious wake syscalls.

the basic idea is to replace the special value -1 with a sticky
waiters bit (repurposing the sign bit) preserved under both wait and
post. any post that takes place with the waiters bit set will perform
a futex wake.

to be useful, the waiters bit needs to be removable, and to remove it
safely, we perform a broadcast wake instead of a normal single-task
wake whenever removing the bit. this lets any un-accounted-for waiters
wake and re-add the waiters bit if they still need it.

there are multiple possible choices for when to perform this
broadcast, but the optimal choice seems to be doing it whenever the
observed waiters count is less than two (semantically, this means
exactly one, but we might see a stale count of zero). in this case,
the expected number of threads to be woken is one, with exactly the
same cost as a non-broadcast wake.
---
 src/thread/sem_getvalue.c  |  3 ++-
 src/thread/sem_post.c      | 12 ++++++++----
 src/thread/sem_timedwait.c | 10 ++++++----
 src/thread/sem_trywait.c   |  6 +++---
 4 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/src/thread/sem_getvalue.c b/src/thread/sem_getvalue.c
index d9d83071..c0b7762d 100644
--- a/src/thread/sem_getvalue.c
+++ b/src/thread/sem_getvalue.c
@@ -1,8 +1,9 @@
 #include <semaphore.h>
+#include <limits.h>
 
 int sem_getvalue(sem_t *restrict sem, int *restrict valp)
 {
 	int val = sem->__val[0];
-	*valp = val < 0 ? 0 : val;
+	*valp = val & SEM_VALUE_MAX;
 	return 0;
 }
diff --git a/src/thread/sem_post.c b/src/thread/sem_post.c
index 31e3293d..5c2517f2 100644
--- a/src/thread/sem_post.c
+++ b/src/thread/sem_post.c
@@ -1,17 +1,21 @@
 #include <semaphore.h>
+#include <limits.h>
 #include "pthread_impl.h"
 
 int sem_post(sem_t *sem)
 {
-	int val, waiters, priv = sem->__val[2];
+	int val, new, waiters, priv = sem->__val[2];
 	do {
 		val = sem->__val[0];
 		waiters = sem->__val[1];
-		if (val == SEM_VALUE_MAX) {
+		if ((val & SEM_VALUE_MAX) == SEM_VALUE_MAX) {
 			errno = EOVERFLOW;
 			return -1;
 		}
-	} while (a_cas(sem->__val, val, val+1+(val<0)) != val);
-	if (val<0 || waiters) __wake(sem->__val, 1, priv);
+		new = val + 1;
+		if (waiters <= 1)
+			new &= ~0x80000000;
+	} while (a_cas(sem->__val, val, new) != val);
+	if (val<0) __wake(sem->__val, waiters>1 ? 1 : -1, priv);
 	return 0;
 }
diff --git a/src/thread/sem_timedwait.c b/src/thread/sem_timedwait.c
index 58d3ebfe..aa67376c 100644
--- a/src/thread/sem_timedwait.c
+++ b/src/thread/sem_timedwait.c
@@ -1,4 +1,5 @@
 #include <semaphore.h>
+#include <limits.h>
 #include "pthread_impl.h"
 
 static void cleanup(void *p)
@@ -13,14 +14,15 @@ int sem_timedwait(sem_t *restrict sem, const struct timespec *restrict at)
 	if (!sem_trywait(sem)) return 0;
 
 	int spins = 100;
-	while (spins-- && sem->__val[0] <= 0 && !sem->__val[1]) a_spin();
+	while (spins-- && !(sem->__val[0] & SEM_VALUE_MAX) && !sem->__val[1])
+		a_spin();
 
 	while (sem_trywait(sem)) {
-		int r;
+		int r, priv = sem->__val[2];
 		a_inc(sem->__val+1);
-		a_cas(sem->__val, 0, -1);
+		a_cas(sem->__val, 0, 0x80000000);
 		pthread_cleanup_push(cleanup, (void *)(sem->__val+1));
-		r = __timedwait_cp(sem->__val, -1, CLOCK_REALTIME, at, sem->__val[2]);
+		r = __timedwait_cp(sem->__val, 0x80000000, CLOCK_REALTIME, at, priv);
 		pthread_cleanup_pop(1);
 		if (r) {
 			errno = r;
diff --git a/src/thread/sem_trywait.c b/src/thread/sem_trywait.c
index 04edf46b..beb435da 100644
--- a/src/thread/sem_trywait.c
+++ b/src/thread/sem_trywait.c
@@ -1,12 +1,12 @@
 #include <semaphore.h>
+#include <limits.h>
 #include "pthread_impl.h"
 
 int sem_trywait(sem_t *sem)
 {
 	int val;
-	while ((val=sem->__val[0]) > 0) {
-		int new = val-1-(val==1 && sem->__val[1]);
-		if (a_cas(sem->__val, val, new)==val) return 0;
+	while ((val=sem->__val[0]) & SEM_VALUE_MAX) {
+		if (a_cas(sem->__val, val, val-1)==val) return 0;
 	}
 	errno = EAGAIN;
 	return -1;
-- 
2.21.0
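Not part of the patch above: a quick harness along these lines (a
hypothetical sketch; the thread counts, round count, and timeout are
arbitrary choices, and none of this appears in the musl tree) can be
built with "cc -O2 -pthread" against a fixed and an unfixed libc. Two
posters racing two waiters mirror the four-thread scenario at the top
of the thread; on an unfixed build a lost wake eventually surfaces as
a waiter timing out.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POSTERS 2
#define WAITERS 2
#define ROUNDS  1000000

static sem_t sem;

static void *waiter(void *arg)
{
	for (int i = 0; i < ROUNDS; i++) {
		struct timespec at;
		clock_gettime(CLOCK_REALTIME, &at);
		at.tv_sec += 10; /* generous; a lost wake shows up as a timeout */
		if (sem_timedwait(&sem, &at)) {
			fprintf(stderr, "lost wake? waiter timed out (round %d)\n", i);
			exit(1);
		}
	}
	return 0;
}

static void *poster(void *arg)
{
	for (int i = 0; i < ROUNDS; i++)
		if (sem_post(&sem)) {
			perror("sem_post");
			exit(1);
		}
	return 0;
}

int main(void)
{
	pthread_t w[WAITERS], p[POSTERS];
	/* initial value 1, matching the scenario at the top */
	sem_init(&sem, 0, 1);
	for (int i = 0; i < WAITERS; i++) pthread_create(w+i, 0, waiter, 0);
	for (int i = 0; i < POSTERS; i++) pthread_create(p+i, 0, poster, 0);
	/* posts and waits balance (POSTERS*ROUNDS == WAITERS*ROUNDS),
	 * so no waiter should ever time out if wakes are delivered */
	for (int i = 0; i < WAITERS; i++) pthread_join(w[i], 0);
	for (int i = 0; i < POSTERS; i++) pthread_join(p[i], 0);
	puts("ok");
	return 0;
}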