mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: Multi-threaded performance progress
Date: Tue, 26 Aug 2014 13:53:04 -0400	[thread overview]
Message-ID: <20140826175304.GD12888@brightrain.aerifal.cx> (raw)
In-Reply-To: <1409070919.8054.47.camel@eris.loria.fr>

I should have bothered to check and see that there was a patch before
responding...

On Tue, Aug 26, 2014 at 06:35:19PM +0200, Jens Gustedt wrote:
> diff --git a/src/thread/pthread_cond_timedwait.c b/src/thread/pthread_cond_timedwait.c
> index 2d192b0..136fa6a 100644
> --- a/src/thread/pthread_cond_timedwait.c
> +++ b/src/thread/pthread_cond_timedwait.c
> @@ -42,12 +42,32 @@ static inline void lock(volatile int *l)
>  	}
>  }
>  
> +/* Avoid taking the lock if we know it isn't necessary. */
> +static inline int lockRace(volatile int *l, int*volatile* notifier)
> +{
> +	int ret = 1;
> +	if (!*notifier && (ret = a_cas(l, 0, 1))) {
> +		a_cas(l, 1, 2);
> +		do __wait(l, 0, 2, 1);
> +		while (!*notifier && (ret = a_cas(l, 0, 2)));
> +	}
> +	return ret;
> +}

This code was confusing at first, and technically *notifier is
accessed without synchronization: it may be written by one thread and
read by another without any intervening barrier. I suspect it works in
practice, but I'd like to avoid it. It does, however, answer my
question about how you were detecting the "don't need to lock" case.
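For comparison, here is a hypothetical sketch (not musl code; the
names, the futex handling, and the use of C11 atomics are all my own
stand-ins) of the same "skip the lock when a notifier is already
published" idea, written so that the cross-thread read of the notifier
pointer is explicitly synchronized rather than racy:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical sketch, not musl code: the waiter reads the notifier
 * pointer with acquire ordering and the signaler publishes it with
 * release ordering, so no unsynchronized access remains. */
struct waiter_sketch {
	_Atomic(int *) notify; /* published by the signaler */
	atomic_int lock;       /* 0 = unlocked, 1 = locked */
};

/* Returns 1 if locking was skipped because a notifier was seen,
 * 0 if the lock was actually acquired. */
static int lock_unless_notified(struct waiter_sketch *w)
{
	int expected = 0;
	if (atomic_load_explicit(&w->notify, memory_order_acquire))
		return 1; /* signaler already acted; no lock needed */
	while (!atomic_compare_exchange_weak_explicit(&w->lock,
			&expected, 1,
			memory_order_acquire, memory_order_relaxed)) {
		if (atomic_load_explicit(&w->notify, memory_order_acquire))
			return 1; /* notifier appeared while contended */
		expected = 0; /* real code would futex-wait here, not spin */
	}
	return 0;
}

/* Signaler side: publish the notifier with release ordering. */
static void publish_notify(struct waiter_sketch *w, int *ref)
{
	atomic_store_explicit(&w->notify, ref, memory_order_release);
}
```

This is only meant to show where the missing barrier pair would go;
the patch's plain-int version relies on the a_cas in lockRace to act
as an implicit barrier, which is what makes it hard to reason about.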

Note that the performance of the code in which you're trying to avoid
the lock does not matter in the slightest except when a race happens
between a thread acting on cancellation or timeout and a signaler
(since that's the only time it runs). I expect this is extremely rare,
so unless we determine otherwise, I'd rather not add complexity here.

>  int pthread_cond_timedwait(pthread_cond_t *restrict c, pthread_mutex_t *restrict m, const struct timespec *restrict ts)
>  {
> -	struct waiter node = { .cond = c, .mutex = m };
> +	struct waiter node = { .cond = c, .mutex = m, .state = WAITING, .barrier = 2 };

This may slightly pessimize process-shared cond vars, but seems ok.

> +static inline int cond_signal (struct waiter * p, int* ref)
> +{
> +	int ret = a_cas(&p->state, WAITING, SIGNALED);
> +	if (ret != WAITING) {
> +		ref[0]++;
> +		p->notify = ref;
> +		if (p->prev) p->prev->next = p->next;
> +		if (p->next) p->next->prev = p->prev;
> +		p->next = 0;
> +		p->prev = 0;
> +	}
> +	return ret;
> +}
> +
>  int __private_cond_signal(pthread_cond_t *c, int n)
>  {
> -	struct waiter *p, *first=0;
> -	int ref = 0, cur;
> +	struct waiter *p, *prev, *first=0;
> +	int ref[2] = { 0 }, cur;
>  
> -	lock(&c->_c_lock);
> -	for (p=c->_c_tail; n && p; p=p->prev) {
> -		if (a_cas(&p->state, WAITING, SIGNALED) != WAITING) {
> -			ref++;
> -			p->notify = &ref;
> +	if (n == 1) {
> +		lock(&c->_c_lock);
> +		for (p=c->_c_tail; p; p=prev) {
> +			prev = p->prev;
> +			if (!cond_signal(p, ref)) {
> +				first=p;
> +				p=prev;
> +				first->prev = 0;
> +				break;
> +			}
> +		}
> +		/* Split the list, leaving any remainder on the cv. */
> +		if (p) {
> +			p->next = 0;
>  		} else {
> -			n--;
> -			if (!first) first=p;
> +			c->_c_head = 0;
>  		}
> -	}
> -	/* Split the list, leaving any remainder on the cv. */
> -	if (p) {
> -		if (p->next) p->next->prev = 0;
> -		p->next = 0;
> +		c->_c_tail = p;
> +		unlockRace(&c->_c_lock, ref[0]);
>  	} else {
> -		c->_c_head = 0;
> +		lock(&c->_c_lock);
> +		struct waiter * head = c->_c_head;
> +		if (head) {
> +			/* Signal head and tail first to reduce possible
> +			 * races for the cv to the beginning of the
> +			 * processing. */
> +			int headrace = cond_signal(head, ref);
> +			struct waiter * tail = c->_c_tail;
> +			p=tail->prev;
> +			if (tail != head) {
> +				if (!cond_signal(tail, ref)) first=tail;
> +				else while (p != head) {
> +					prev = p->prev;
> +					if (!cond_signal(p, ref)) {
> +						first=p;
> +						p=prev;
> +						break;
> +					}
> +					p=prev;
> +				}
> +			}
> +			if (!first && !headrace) first = head;
> +			c->_c_head = 0;
> +			c->_c_tail = 0;
> +			/* Now process the inner part of the list. */
> +			if (p) {
> +				while (p != head) {
> +					prev = p->prev;
> +					cond_signal(p, ref);
> +					p=prev;
> +				}
> +			}
> +		}
> +		unlockRace(&c->_c_lock, ref[0]);
>  	}
> -	c->_c_tail = p;
> -	unlock(&c->_c_lock);
>  
>  	/* Wait for any waiters in the LEAVING state to remove
>  	 * themselves from the list before returning or allowing
>  	 * signaled threads to proceed. */
> -	while ((cur = ref)) __wait(&ref, 0, cur, 1);
> +	while ((cur = ref[0])) __wait(&ref[0], &ref[1], cur, 1);
>  
>  	/* Allow first signaled waiter, if any, to proceed. */
>  	if (first) unlock(&first->barrier);

This is sufficiently more complex that I think we'd need evidence that
the races actually adversely affect performance before it could be
justified. And if it is justified, the logic should probably be split
between pthread_cond_signal and pthread_cond_broadcast rather than
handled as two cases in one function here. The only reason both cases
are covered by one function now is that they are naturally trivial
special cases of the exact same code.
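To illustrate that last point, a minimal sketch (hypothetical names;
only the list-walk skeleton, with the WAITING/SIGNALED state handling
and locking of the real code omitted) of why signal and broadcast fall
out of one loop:

```c
#include <limits.h>
#include <stddef.h>

/* Hypothetical sketch of the pre-patch structure: one tail-to-head
 * walk bounded by n covers both entry points, since signal is just
 * n == 1 and broadcast is effectively n == INT_MAX. */
struct waiter_sketch2 {
	struct waiter_sketch2 *prev, *next;
	int state; /* 0 = waiting, 1 = signaled */
};

/* Mark up to n tail-most waiters as signaled; return the count. */
static int signal_up_to(struct waiter_sketch2 *tail, int n)
{
	int woken = 0;
	for (struct waiter_sketch2 *p = tail; n && p; p = p->prev) {
		p->state = 1;
		n--;
		woken++;
	}
	return woken;
}
```

With this shape, the broadcast case adds no code at all; the patch
above gives the two cases genuinely different control flow, which is
why the split into separate functions would then make sense.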

Rich


Thread overview: 15+ messages
2014-08-26  3:43 Rich Felker
2014-08-26  7:04 ` Jens Gustedt
2014-08-26 10:44   ` Szabolcs Nagy
2014-08-26 11:09     ` Jens Gustedt
2014-08-26 16:35   ` Jens Gustedt
2014-08-26 17:32     ` Rich Felker
2014-08-26 17:53     ` Rich Felker [this message]
2014-08-26 18:30       ` Jens Gustedt
2014-08-26 19:05         ` Rich Felker
2014-08-26 19:34           ` Jens Gustedt
2014-08-26 20:26             ` Rich Felker
2014-08-26 21:15               ` Jens Gustedt
2014-08-26 21:36                 ` Rich Felker
2014-08-27  9:53                   ` Jens Gustedt
2014-08-27 16:47                     ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
