From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: [PATCH 1/2] let them spin
Date: Sun, 30 Aug 2015 13:02:39 -0400
Message-ID: <20150830170239.GM7833@brightrain.aerifal.cx>
To: musl@lists.openwall.com

On Sun, Aug 30, 2015 at 10:41:53AM +0200, Jens Gustedt wrote:
> > > So the difference isn't dramatic, just one order of magnitude and
> > > everybody gets his chance. These chances are not equal, sure, but
> > > NEVER in capitals is certainly a big word.
> >
> > Try this: on a machine with at least 3 physical cores, 3 threads
> > hammer on the same lock, counting the number of times they succeed in
> > taking it. Once any one thread has taken it at least 10 million times
> > or so, stop and print the counts. With your spin strategy I would
> > expect to see 2 threads with counts near 10 million and one thread
> > with a count in the hundreds or less, maybe even a single-digit count.
> > With the current behavior (never spinning if there's a waiter) I would
> > expect all 3 counts to be similar.
>
> The setting that you describe is really a pathological one, where the
> threads don't do any work between taking the lock and releasing it. Do
> I understand that correctly?

If you're using locks to implement fake greater-than-wordsize atomics
then it's the normal case, not a pathological one. You have
(effectively) things like:

	_Atomic long double x;

	__lock(global_lock);
	x++;
	__unlock(global_lock);

For a more realistic example, consider atomic CAS on a linked-list
prev/next pointer pair.

Rich