mailing list of musl libc
From: Jens Gustedt <jens.gustedt@inria.fr>
To: musl@lists.openwall.com
Subject: Re: In-progress stuff, week of Sept 1
Date: Sat, 06 Sep 2014 09:04:14 +0200
Message-ID: <1409987054.4856.98.camel@eris.loria.fr>
In-Reply-To: <20140905164510.GM23797@brightrain.aerifal.cx>

On Fri, Sep 05, 2014 at 12:45 -0400, Rich Felker wrote:
> On Fri, Sep 05, 2014 at 11:27:52AM +0200, Jens Gustedt wrote:
> > Again, off the top of my head, I remember two places with "ad hoc"
> > locking conventions that we touched in recent weeks: cv and
> > pthread_once_t. Both encode waiters in a suboptimal way: cv only has
> > an estimate that "somebody might be waiting", and pthread_once_t uses
> > a single static `waiters` variable shared by all instances.
> 
> Yes, this is bad, but pthread_once_t would be fine with a waiters flag
> in the futex int. I'll look at improving that later. But since the
> number of pthread_once_t objects in the whole program is
> finite/bounded, it's really irrelevant to performance.
> 
> > > but it's a bad idea to have the waiters
> > > count inside the futex int. This is because every time the futex int
> > > changes, you risk FUTEX_WAIT returning with EAGAIN. Under high load of
> > > waiters arriving/leaving, it might be impossible to _ever_ start a
> > > wait; all threads may just spin. The futex int should be something
> > > that doesn't change except for a very small finite number of changes
> > > (like adding a waiter flag, not a waiter count) before you actually
> > > have a chance of successfully getting the lock or whatever.
> > 
> > First, I think you overestimate the trouble that this can cause.
> 
> Actually I don't. It follows from the massive performance problems I
> had with the original cv implementation, where wait was "write your
> own tid then futex wait" and signal was "write your own tid then
> futex wake". That approach is trivially correct and avoids the
> sequence number wrapping issues the current pshared implementation
> has, but it had a HUGE number of spurious wakes: something like 3:1
> (spurious:genuine) with several threads hammering the cv.

The situation would be different here. A thread would update the
waiters part of the `int` only once, and then attempt the futex wait
in a loop (retrying while it fails with EAGAIN) without changing the
`int` again.
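
To make that concrete, here is a minimal sketch of such a wait loop,
using C11 atomics and a raw futex syscall. The names and the bit
layout (lock bit in bit 0, a waiter count in the remaining bits) are
illustrative assumptions, not actual musl code:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

#define LOCK_BIT 1   /* bit 0: lock held */
#define WAITER   2   /* bits 1..: number of waiters */

static int futex_wait(atomic_int *f, int expected)
{
    return syscall(SYS_futex, f, FUTEX_WAIT_PRIVATE, expected, 0, 0, 0);
}

/* Sleep until the lock bit clears. The waiter count is bumped
 * exactly once on entry and dropped once on exit; an EAGAIN from
 * the kernel (the int changed between userspace and kernelspace)
 * only causes a re-read and a retry, never another write. */
static void wait_for_unlock(atomic_int *f)
{
    int v = atomic_fetch_add(f, WAITER) + WAITER;
    while (v & LOCK_BIT) {
        futex_wait(f, v);     /* EAGAIN or a wake: fall through */
        v = atomic_load(f);   /* re-read, but do not modify     */
    }
    atomic_fetch_sub(f, WAITER);
}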

> On typical CPUs, there's a window of 500+ cycles between the atomic
> operation in userspace and the recheck in kernelspace. That is plenty
> of time for another core to cause your futex wait to fail with
> EAGAIN.
> 
> > In any case, I think this problem isn't very relevant for the two
> > use cases I mention above. For cv this is the internal lock; there
> > will be at most one "waiting" thread on it (it has to hold the
> > mutex) and perhaps several signaling threads, but it should be rare
> > that it is contended by more than two threads at a time. For the
> > current version the "may-have-waiters" flag may trigger some
> > superfluous wake-ups, I think.
> 
> This is possibly true, but it would need some analysis, and thinking
> about it now is distracting me from other things that we want to get
> done now. ;-)
> 
> > And pthread_once_t is of minor performance importance, anyhow.
> 
> I don't see how a may-have-waiters approach has any cost for
> pthread_once_t unless some of the waiters are cancelled. And that's a
> pathological case that's not worth optimizing.

Right, and at least here such an approach would help avoid the ugly
static "waiters" variable.

Jens

-- 
:: INRIA Nancy Grand Est ::: AlGorille ::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536   ::
:: :::::::::::::::::::::: gsm France : +33 651400183   ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::



Thread overview: 14+ messages
2014-09-05  1:14 Rich Felker
2014-09-05  1:57 ` Szabolcs Nagy
2014-09-05  8:47   ` Rich Felker
2014-09-05  9:07     ` Alexander Monakov
2014-09-05 16:02       ` Rich Felker
2014-09-05 17:40         ` Alexander Monakov
2014-09-05 18:06           ` Rich Felker
2014-09-05  7:41 ` Jens Gustedt
2014-09-05  8:32   ` Rich Felker
2014-09-05  9:27     ` Jens Gustedt
2014-09-05 16:45       ` Rich Felker
2014-09-05 17:09         ` Alexander Monakov
2014-09-06  7:04         ` Jens Gustedt [this message]
2014-09-05  8:49 ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
