mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: bug in pthread_cond_broadcast
Date: Wed, 13 Aug 2014 00:11:09 -0400
Message-ID: <20140813041109.GI12888@brightrain.aerifal.cx>
In-Reply-To: <20140812233310.GF12888@brightrain.aerifal.cx>

On Tue, Aug 12, 2014 at 07:33:10PM -0400, Rich Felker wrote:
> One potential solution I have in mind is to get rid of this complex
> waiter accounting by:
> 
> 1. Having pthread_cond_broadcast set the new-waiter state on the mutex
>    when requeuing, so that the next unlock forces a futex wake that
>    otherwise might not happen.
> 
> 2. Having pthread_cond_timedwait set the new-waiter state on the mutex
>    after relocking it, either unconditionally (this would be rather
>    expensive) or on some condition. One possible condition would be to
>    keep a counter in the condvar object for the number of waiters that
>    have been requeued, incrementing it by the number requeued at
>    broadcast time and decrementing it on each wake. However the latter
>    requires accessing the cond var memory in the return path for wait,
>    and I don't see any good way around this. Maybe there's a way to
>    use memory on the waiters' stacks?
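
For concreteness, here's a minimal stand-alone sketch of what point 1
could look like. The struct layout, the WAITER_BIT flag, and the idea
that the cv records its bound mutex are purely illustrative assumptions
for the sake of discussion, not musl's actual internals:

#include <limits.h>
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

#define WAITER_BIT 0x80000000u        /* hypothetical "new-waiter" flag */

struct mutex { _Atomic unsigned lock; };              /* 0 = free */
struct cond  { _Atomic unsigned seq; struct mutex *m; };

/* Point 1: before requeuing, flag the mutex so that its next unlock
 * performs a futex wake even if the unlocking thread itself never saw
 * contention. Then wake one waiter and requeue the rest from the cv
 * futex onto the mutex futex. */
static void cond_broadcast_sketch(struct cond *c)
{
	struct mutex *m = c->m;

	atomic_fetch_or(&m->lock, WAITER_BIT);
	atomic_fetch_add(&c->seq, 1);  /* so late waiters don't sleep on a stale value */

	/* futex(2) args: uaddr, op, val (wake count), val2 (requeue count,
	 * passed in the timeout slot), uaddr2, val3 */
	syscall(SYS_futex, &c->seq, FUTEX_REQUEUE, 1,
	        (unsigned long)INT_MAX, &m->lock, 0);
}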

On further consideration, I don't think this works. If a thread other
than one of the cv waiters happened to get the mutex first, it would
fail to set the new-waiter state again at unlock time, and the waiters
could get stuck never waking up.

So I think it's really necessary to move the waiter count to the
mutex.

One way to do this with no synchronization cost at signal time would
be to have waiters increment the mutex waiter count before waiting on
the cv futex, but of course this could yield a lot of spurious futex
wake syscalls for the mutex if other threads are locking and unlocking
the mutex before the signal occurs.
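
In the same illustrative style as the sketch above (again, assumed
layouts and helper names, not musl code), the early increment and the
cost it incurs would look roughly like this:

#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct mutex { _Atomic unsigned lock; _Atomic int waiters; };
struct cond  { _Atomic unsigned seq; };

static void mutex_lock_sketch(struct mutex *m)
{
	unsigned expect = 0;
	if (atomic_compare_exchange_strong(&m->lock, &expect, 1))
		return;                           /* uncontended fast path */

	atomic_fetch_add(&m->waiters, 1);         /* ordinary contended path */
	for (;;) {
		expect = 0;
		if (atomic_compare_exchange_strong(&m->lock, &expect, 1))
			break;
		/* sleep while the lock word still holds the value we saw */
		syscall(SYS_futex, &m->lock, FUTEX_WAIT, expect, NULL, NULL, 0);
	}
	atomic_fetch_sub(&m->waiters, 1);
}

static void mutex_unlock_sketch(struct mutex *m)
{
	atomic_store(&m->lock, 0);
	/* With the early increment in cond_wait_sketch below, this wake
	 * fires on every unlock for as long as any cv waiter is asleep;
	 * that is the spurious-syscall cost described above. */
	if (atomic_load(&m->waiters))
		syscall(SYS_futex, &m->lock, FUTEX_WAKE, 1, NULL, NULL, 0);
}

static void cond_wait_sketch(struct cond *c, struct mutex *m)
{
	unsigned seq = atomic_load(&c->seq);

	/* Count ourselves as a mutex waiter *before* sleeping on the cv
	 * futex, so that any unlock happening after a broadcast has
	 * requeued us onto &m->lock is guaranteed to issue the futex
	 * wake we will need. */
	atomic_fetch_add(&m->waiters, 1);

	mutex_unlock_sketch(m);
	syscall(SYS_futex, &c->seq, FUTEX_WAIT, seq, NULL, NULL, 0);

	mutex_lock_sketch(m);
	atomic_fetch_sub(&m->waiters, 1);
}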

I think the other way you proposed is in some ways ideal, but also
possibly unachievable. While the broadcasting thread can know how many
threads it requeued, the requeued threads seem to have no way of
knowing, once the futex wait returns, that they were requeued. Even
after a successful requeue, the futex wait could still return with a
timeout or EINTR or similar, in which case there seems to be no way
for the waiter to know whether it needs to decrement the mutex
waiter count. I don't see any solution to this problem...
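
To make the dead end concrete in terms of the second sketch (reusing
its types and helpers; the error handling here is only schematic, not
a proposed implementation):

#include <errno.h>
#include <time.h>

static int cond_timedwait_return_path(struct cond *c, struct mutex *m,
                                      unsigned seq, const struct timespec *ts)
{
	long r = syscall(SYS_futex, &c->seq, FUTEX_WAIT, seq, ts, NULL, 0);
	int e = (r == -1) ? errno : 0;

	if (e == ETIMEDOUT || e == EINTR) {
		/* Two indistinguishable cases at this point:
		 *  a) no broadcast happened, so we were still queued on
		 *     &c->seq and were never charged to the mutex; or
		 *  b) a broadcast already requeued us onto &m->lock and,
		 *     under the counting scheme above, accounted for us
		 *     in the mutex waiter count.
		 * Only case (b) owes a decrement, and neither the futex
		 * return value nor anything on our own stack tells us
		 * which case we are in. */
	}

	mutex_lock_sketch(m);        /* relock before returning either way */
	return e;
}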

Rich



Thread overview: 11+ messages
2014-08-11 23:58 Jens Gustedt
2014-08-12 16:50 ` Szabolcs Nagy
2014-08-12 17:19   ` Rich Felker
2014-08-12 18:18     ` Jens Gustedt
2014-08-12 21:20       ` Rich Felker
2014-08-12 22:50         ` Jens Gustedt
2014-08-12 23:33           ` Rich Felker
2014-08-13  4:11             ` Rich Felker [this message]
2014-08-13  0:30           ` Rich Felker
2014-08-13  7:00             ` Jens Gustedt
2014-08-13 12:34               ` Rich Felker
