From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: Multi-threaded performance progress
Date: Tue, 26 Aug 2014 15:05:07 -0400
Message-ID: <20140826190507.GE12888@brightrain.aerifal.cx>
References: <20140826034321.GA13999@brightrain.aerifal.cx> <1409036654.4835.14.camel@eris.loria.fr> <1409070919.8054.47.camel@eris.loria.fr> <20140826175304.GD12888@brightrain.aerifal.cx> <1409077839.8054.54.camel@eris.loria.fr>
In-Reply-To: <1409077839.8054.54.camel@eris.loria.fr>
To: musl@lists.openwall.com
Reply-To: musl@lists.openwall.com
User-Agent: Mutt/1.5.21 (2010-09-15)

On Tue, Aug 26, 2014 at 08:30:39PM +0200, Jens Gustedt wrote:
> On Tuesday, 26.08.2014, 13:53 -0400, Rich Felker wrote:
> > I should have bothered to check and see that there was a patch before
> > responding...
> >
> > On Tue, Aug 26, 2014 at 06:35:19PM +0200, Jens Gustedt wrote:
> > > diff --git a/src/thread/pthread_cond_timedwait.c b/src/thread/pthread_cond_timedwait.c
> > > index 2d192b0..136fa6a 100644
> > > --- a/src/thread/pthread_cond_timedwait.c
> > > +++ b/src/thread/pthread_cond_timedwait.c
> > > @@ -42,12 +42,32 @@ static inline void lock(volatile int *l)
> > >  	}
> > >  }
> > >
> > > +/* Avoid taking the lock if we know it isn't necessary. */
> > > +static inline int lockRace(volatile int *l, int*volatile* notifier)
> > > +{
> > > +	int ret = 1;
> > > +	if (!*notifier && (ret = a_cas(l, 0, 1))) {
> > > +		a_cas(l, 1, 2);
> > > +		do __wait(l, 0, 2, 1);
> > > +		while (!*notifier && (ret = a_cas(l, 0, 2)));
> > > +	}
> > > +	return ret;
> > > +}
> >
> > This code was confusing at first, and technically *notifier is
> > accessed without synchronization -- it may be written by one thread
> > and read by another without any intervening barrier. I suspect it
> > works, but I'd like to avoid it. But it does answer my question about
> > how you were detecting the case of "don't need to lock".
>
> Hm, I don't see the point of your remark, musl is doing such "atomic"
> reads all over, no?

Only when the write is atomic. Here, it's not. Aside from theoretical
issues with this being UB, there's nothing to order the store to
notify with respect to other stores/loads done in the signaling
thread. For example, ref[0]++ might be reordered after p->notify=ref,
or the writes to modify the list might be reordered before it. I
haven't analyzed all the possible consequences of such reordering, but
it's unlikely to be safe.

(Note that volatile only precludes reordering between multiple
volatile accesses, not between volatile and non-volatile. But even if
it did, it wouldn't enforce an ordering on cache synchronization
between cores.)

> There is no macro for atomic load, only for atomic
> store, otherwise I would have used that.
> And notice that *notifier is
> volatile, so the load can't be optimized out.
>
> Or do you mean that I should use an atomic store at the other end?

Yes. With an atomic store at the other end, I think it could be
correct, but I'd need to review it further to be sure.

> > Note that the performance of the code in which you're trying to avoid
> > the lock does not matter in the slightest except when a race happens
> > between a thread acting on cancellation or timeout and a signaler
> > (since that's the only time it runs). I expect this is extremely rare,
> > so unless we determine otherwise, I'd rather not add complexity here.
>
> If we have a broadcaster working a long list of waiters, this might
> still happen sufficiently often. And the "complexity" is hidden in the
> execution pattern of the current version, where control and handling
> of the list alternates between different threads, potentially as many
> times as there are waiters in the list.

Does your code eliminate that all the time? If so it's more
interesting. I'll look at it more.

> > This is sufficiently more complex that I think we'd need evidence that
> > the races actually adversely affect performance for it to be
> > justified.
>
> I can imagine that, seen as it is here, this looks invasive, but it
> isn't really.

It changes the code from one code path with a parameter that can be
read and understood almost immediately to a number of special cases
(one-element list, etc.), and it has additional confusing
synchronization logic with the notify pointer being used to convey
information outside of the lock.

> This is just the merge of about 10 patches that more or
> less rewrite it to something functionally equivalent. (Well, with some
> obvious exceptions.) I found posting all those patches too much for the
> list, was I wrong?
Well, the mix of refactoring (arguably gratuitous) with functional
changes may be making it look more complex than it is, so I'll see if,
on further reading, it's simpler than it looks (or easily simplified).

> > And if it is justified, the code should probably be moved
> > to pthread_cond_signal and pthread_cond_broadcast rather than 2 cases
> > in one function here. The only reason both cases are covered in one
> > function now is that they're naturally trivial special cases of the
> > exact same code.
>
> Sure, that could and should perhaps be done, although working on just
> one file and being able to have a bunch of helper functions as static
> inline was quite convenient.

Yes, for working on tweaking it, that probably makes things easier.

Rich