mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: [PATCH 1/2] let them spin
Date: Sat, 29 Aug 2015 13:16:12 -0400
Message-ID: <20150829171612.GD7833@brightrain.aerifal.cx>
In-Reply-To: <1440838179.693.4.camel@dysnomia.u-strasbg.fr>

On Sat, Aug 29, 2015 at 10:50:44AM +0200, Jens Gustedt wrote:
> Remove a test in __wait that looked if other threads already attempted to
> go to sleep in futex_wait.
> 
> This has no impact on the fast path. But contrary to what one might
> think at first glance, the check actually slows things down when there
> is congestion.
> 
> Applying this patch shows no difference in behavior in a single-core
> setting, so this shortcut seems to be superfluous.
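
For context, the check under discussion can be sketched roughly as
follows. This is a simplified, hypothetical rendering of a
__wait-style helper (the name wait_on and its exact shape are
illustrative, not musl's actual source); the `waiters` test in the
spin loop is what the patch removes:

```c
#define _GNU_SOURCE
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: spin only while nobody is already asleep in futex_wait.
 * Once another thread has registered as a waiter, skip the spin
 * phase entirely and go to sleep as well. */
static void wait_on(atomic_int *addr, atomic_int *waiters, int val)
{
	int spins = 100;
	while (spins-- > 0 && atomic_load(waiters) == 0) {
		if (atomic_load(addr) != val)
			return;  /* value changed while spinning: done */
	}
	atomic_fetch_add(waiters, 1);
	while (atomic_load(addr) == val)
		syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
	atomic_fetch_sub(waiters, 1);
}
```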

The purpose of this code is twofold: improving fairness of the lock
and avoiding burning cpu time that's _known_ to be a waste.

If you spin on a lock that already has waiters, the thread that spins
has a much better chance to get the lock than any of the existing
waiters which are in futex_wait. Assuming sufficiently many cores that
no thread which is not sleeping gets preempted, the spinning thread is
basically guaranteed to get the lock unless it spins so long that it
gives up and calls futex_wait itself. This is simply because returning
from futex_wait (which all the other waiters have to do) takes a lot
more time than one spin. I suspect there are common loads under which
many of the waiters will NEVER get the lock.
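
To see why the spinner wins the race, consider what each side's
acquire attempt costs. In a hypothetical spin-acquire (the names
try_acquire/spin_acquire below are illustrative, not musl's lock
code), each iteration is a single compare-and-swap, a handful of
cycles, while a woken waiter must first complete a full return from
the futex_wait syscall before it can even make its first attempt:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* One acquire attempt: a single CAS, costing roughly one
 * cache-coherency round trip. */
static bool try_acquire(atomic_int *lock)
{
	int expected = 0;
	return atomic_compare_exchange_strong(lock, &expected, 1);
}

/* A spinner retries this every few cycles; a thread returning from
 * futex_wait pays microseconds of syscall-return overhead before its
 * first CAS, so a concurrent spinner almost always gets there first. */
static void spin_acquire(atomic_int *lock)
{
	while (!try_acquire(lock))
		;
}
```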

The other issue is simply wasted cpu time. Unlike in the no-waiters
case where spinning _can_ save a lot of cpu time (no futex_wait or
futex_wake syscall), spinning in the with-waiters case always wastes
cpu time. If the spinning thread gets the lock when the old owner
unlocks it, the old owner has already determined that it has to send a
futex_wake. So now you have (1) the wasted futex_wake syscall in the
old owner thread and (2) a "spurious" wake in one of the waiters,
which wakes up only to find that the lock was already stolen, and
therefore has to call futex_wait again and start the whole process
over.
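
The wasted work in the with-waiters case shows up in the unlock path,
sketched here hypothetically (the name release and its shape are
illustrative, not musl's actual source): once the owner observes a
nonzero waiter count it is committed to issuing a futex_wake, and if a
spinner steals the lock before the woken waiter runs, both the syscall
and the wakeup accomplish nothing.

```c
#define _GNU_SOURCE
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch of an unlock path.  If a spinning thread grabs the lock
 * between the store and the woken waiter's return, the futex_wake
 * below and the waiter's trip through the kernel are pure overhead:
 * the waiter finds the lock taken and must futex_wait again. */
static long release(atomic_int *lock, atomic_int *waiters)
{
	atomic_store(lock, 0);
	if (atomic_load(waiters) != 0)
		return syscall(SYS_futex, lock, FUTEX_WAKE, 1, NULL, NULL, 0);
	return 0;
}
```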

Rich


  reply	other threads:[~2015-08-29 17:16 UTC|newest]

Thread overview: 7+ messages
2015-08-29  8:50 Jens Gustedt
2015-08-29 17:16 ` Rich Felker [this message]
2015-08-29 19:38   ` Jens Gustedt
2015-08-30  0:39     ` Rich Felker
2015-08-30  8:41       ` Jens Gustedt
2015-08-30 17:02         ` Rich Felker
2015-08-30 21:48           ` Jens Gustedt

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
