From: Jens Gustedt <jens.gustedt@inria.fr>
To: musl@lists.openwall.com
Subject: Re: [PATCH 1/2] let them spin
Date: Sat, 29 Aug 2015 21:38:30 +0200
Message-ID: <1440877110.693.16.camel@inria.fr>
In-Reply-To: <20150829171612.GD7833@brightrain.aerifal.cx>
Rich,
I hope that with the graphics at hand you'll understand a bit better
what I was up to. Again, my apologies.
Still, let me try to answer some of the arguments you give below:
On Saturday, 2015-08-29 at 13:16 -0400, Rich Felker wrote:
> On Sat, Aug 29, 2015 at 10:50:44AM +0200, Jens Gustedt wrote:
> > Remove a test in __wait that checked whether other threads had already
> > attempted to go to sleep in futex_wait.
> >
> > This has no impact on the fast path. But contrary to what one might
> > think at first glance, the shortcut slows things down when there is
> > congestion.
> >
> > Applying this patch shows no difference in behavior in a mono-core
> > setting, so it seems that this shortcut is just superfluous.
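(For readers without the source at hand: the check in question sits at
the top of the generic wait primitive. Roughly, with an illustrative
spin budget and helper names rather than the literal __wait.c, the
pattern is the following sketch.)

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch of a spin-then-sleep wait in the spirit of __wait(); names and
 * the spin budget are illustrative.  "waiters" counts threads already
 * parked in futex_wait.  The condition marked (*) is the shortcut the
 * patch removes: with it, a thread that sees existing waiters skips
 * spinning and goes to sleep right away. */
static void wait_sketch(volatile int *addr, volatile int *waiters, int val)
{
    int spins = 100;                             /* illustrative budget */
    while (spins-- && (!waiters || !*waiters)) { /* (*) waiters check */
        if (*addr != val) return;                /* no need to sleep */
        /* a cpu_relax()/pause would go here */
    }
    if (waiters) __sync_fetch_and_add(waiters, 1);
    while (*addr == val)
        syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
    if (waiters) __sync_fetch_and_sub(waiters, 1);
}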
>
> The purpose of this code is twofold: improving fairness of the lock
> and avoiding burning cpu time that's _known_ to be a waste.
>
> If you spin on a lock that already has waiters, the thread that spins
> has a much better chance to get the lock than any of the existing
> waiters which are in futex_wait. Assuming sufficiently many cores that
> all threads that are not sleeping don't get preempted, the spinning
> thread is basically guaranteed to get the lock unless it spins so long
> that it ends up in futex_wait itself. This is simply because returning
> from futex_wait (which all the other waiters have to do) takes a lot more
> time than one spin. I suspect there are common loads under which many
> of the waiters will NEVER get the lock.
Yes and no. I benchmarked things to learn a bit more. On my machine one
iteration of a spin loop is roughly 10 times faster than a failed call
to futex_wait.
Even with the current strategy, one of the threads sitting in futex_wait
gets woken up and gets its chance with a new spinning phase.
So the difference isn't dramatic, just one order of magnitude, and
everybody gets a chance. The chances are not equal, sure, but NEVER in
capitals is certainly a big word.
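To give an idea how that figure can be obtained (this is only an
illustrative microbenchmark, not the harness I actually used), one can
time a plain spin iteration against a futex_wait that fails immediately
with EAGAIN because the expected value does not match:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Illustrative microbenchmark: one spin-loop iteration (a volatile read)
 * vs. a futex_wait that returns at once with EAGAIN because the expected
 * value (1) differs from the word's current value (0). */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    volatile int word = 0;
    enum { N = 1000000 };
    double t0;

    t0 = now_sec();
    for (int i = 0; i < N; i++)
        if (word != 0) break;        /* one "spin": a volatile read */
    printf("spin iteration:    %.1f ns\n", (now_sec() - t0) / N * 1e9);

    t0 = now_sec();
    for (int i = 0; i < N; i++)      /* expected value 1 != 0: EAGAIN */
        syscall(SYS_futex, &word, FUTEX_WAIT, 1, NULL, NULL, 0);
    printf("failed futex_wait: %.1f ns\n", (now_sec() - t0) / N * 1e9);
    return 0;
}

The absolute numbers of course depend on the machine and kernel.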
On the other hand, the difference in throughput in the multi-core
setting between the different spin versions is dramatic for malloc, I
find.
> The other issue is simply wasted cpu time. Unlike in the no-waiters
> case where spinning _can_ save a lot of cpu time (no futex_wait or
> futex_wake syscall), spinning in the with-waiters case always wastes
> cpu time. If the spinning thread gets the lock when the old owner
> unlocks it, the old owner has already determined that it has to send a
> futex_wake. So now you have (1) the wasted futex_wake syscall in the
> old owner thread and (2) a "spurious" wake in one of the waiters,
> which wakes up only to find that the lock was already stolen, and
> therefore has to call futex_wait again and start the whole process
> over.
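Spelled out as code, the scenario is roughly the following (a simplified
sketch with hypothetical names, not musl's actual lock implementation):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Simplified unlock path to make the scenario concrete: once the owner
 * has seen waiters > 0 it issues a futex_wake.  If a spinning thread
 * grabs the lock before the woken waiter gets scheduled, both the wake
 * syscall and the waiter's wakeup are wasted: the waiter finds the lock
 * taken again and has to go back into futex_wait. */
static void unlock_sketch(volatile int *lock, volatile int *waiters)
{
    __sync_lock_release(lock);           /* atomically store 0, release */
    if (*waiters)                        /* waiters seen: wake one */
        syscall(SYS_futex, lock, FUTEX_WAKE, 1, NULL, NULL, 0);
}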
This wasted CPU time was one of my worries at first; that is why I ran
the mono-core test. Basically, on a single core spinning is *always* a
waste, and going into futex_wait immediately should always be a
win. (BTW, the current implementation should have a performance gap
for exactly 2 threads that alternate on the lock.)
But there is almost no difference between the two strategies; even
here the bias goes a tiny bit towards doing some spinning, even under
very high load. So, no, I don't think there is much wasted CPU
time. Performance seems to be largely dominated by other things, so
spinning shouldn't be a worry in the current state of things.
Jens
--
:: INRIA Nancy Grand Est ::: Camus ::::::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536 ::
:: :::::::::::::::::::::: gsm France : +33 651400183 ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::