From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Monakov
Subject: Re: [PATCH] (V2) a new lock algorithm with lock value and CS counts in the same atomic int
Date: Tue, 20 Jun 2017 17:41:35 +0300 (MSK)
To: musl@lists.openwall.com
Cc: Jens Gustedt
Reply-To: musl@lists.openwall.com
In-Reply-To: <868874$8aqkr1@mail2-relais-roc.national.inria.fr>

On Fri, 16 Jun 2017, Jens Gustedt wrote:

> --- a/src/thread/__lock.c
> +++ b/src/thread/__lock.c
> @@ -2,14 +2,46 @@
>
>  void __lock(volatile int *l)
>  {
[...]
> +	// Spinning failed, so mark ourselves as being inside the CS.
> +	current = a_fetch_add(l, 1) + 1;
> +	/* The main lock acquisition loop for heavy congestion. The only
> +	   change to the value performed inside that loop is a successful
> +	   lock via the CAS that acquires the lock. */
> +	for (;;) {
> +		/* We can only go into wait, if we know that somebody holds the
> +		   lock and will eventually wake us up, again. */
> +		if (current < 0) {
> +			__futexwait(l, current, 1);

Here you reuse the value of 'current' after resuming from a futex wait.
But if the lock is under contention, other threads have with high
probability modified the lock word while we were sleeping, so the value
of 'current' is likely stale, and the following a_cas is pretty much
guaranteed to fail. Wouldn't it be better to reload current = *l after
resuming?

Alexander

> +			current -= INT_MIN + 1;
> +		}
> +		/* assertion: current > 0, because the count
> +		   includes us already. */
> +		int val = a_cas(l, current, INT_MIN + current);
> +		if (val == current) return;
> +		current = val;
> +	}
> }