From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: Explaining cond var destroy [Re: [musl] C threads, v3.0]
Date: Wed, 6 Aug 2014 12:15:52 -0400
Message-ID: <20140806161552.GB1674@brightrain.aerifal.cx>
References: <1407144603.8274.248.camel@eris.loria.fr>
 <20140806035213.GR1674@brightrain.aerifal.cx>
 <1407314595.24324.277.camel@eris.loria.fr>
 <1407318064.24324.282.camel@eris.loria.fr>
 <20140806100335.GV1674@brightrain.aerifal.cx>
 <1407321172.24324.287.camel@eris.loria.fr>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com
In-Reply-To: <1407321172.24324.287.camel@eris.loria.fr>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, Aug 06, 2014 at 12:32:52PM +0200, Jens Gustedt wrote:
> > I think, and perhaps other serious drawbacks too. If I remember
> > correctly, musl's original implementation looked something like the
> > above, but I was also having a hard time figuring out how to work
> > requeue-to-mutex (which makes a big difference to performance on
> > big broadcasts) into it.
>
> Look into pthread_cond_broadcast. I think this could do better
> bookkeeping. Currently it does its case analysis with a mildly
> complicated expression in preparing the arguments to the requeue
> syscall.

I don't think that expression is the problem. What it's doing is very
simple: it's useless to wake any threads if the thread calling
pthread_cond_broadcast currently holds the mutex, since the woken
thread would immediately block again on the mutex. Instead, one
thread will be woken when the mutex is unlocked. This is ensured by
"moving" any remaining wait count from the cond var waiters to the
mutex waiters (so that the mutex unlock will cause a futex wake).

> I think we could be better off by doing a case analysis beforehand
> and updating the different "waiters" variables accordingly *before*
> the syscall. So the targeted threads wouldn't have to do any
> bookkeeping when coming back from wait.

One problem with this is that being woken by a signal isn't the only
way a cond var wait can end. It can also end due to timeout or
cancellation, and in that case there's no signaling thread to be
responsible for the bookkeeping; but if another thread calls
pthread_cond_broadcast at this point, it can legally destroy and free
the cond var, despite the timed-out/cancelled wait still having its
own bookkeeping to do.
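For concreteness, a rough sketch of the broadcast-side "move the
waiters" idea might look something like the following. The struct
layouts, field names (seq, waiters, lock) and the cv_broadcast /
mtx_unlock helpers here are purely illustrative, not musl's actual
internals:

#include <limits.h>
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct cv  { atomic_int seq; atomic_int waiters; };
struct mtx { atomic_int lock; atomic_int waiters; };

static long futex(atomic_int *uaddr, int op, int val, int val2,
                  atomic_int *uaddr2)
{
        return syscall(SYS_futex, uaddr, op, val,
                       (unsigned long)val2, uaddr2, 0);
}

/* Broadcast while the caller holds m: wake nobody now, requeue all
 * waiters onto the mutex futex, and transfer the wait count so the
 * eventual unlock knows it has to issue a futex wake. */
void cv_broadcast(struct cv *c, struct mtx *m)
{
        int w = atomic_load(&c->waiters);
        if (!w) return;

        atomic_fetch_add(&m->waiters, w);
        atomic_fetch_sub(&c->waiters, w);

        /* Bump the sequence so requeued waiters can't sleep on a stale
         * value, then wake 0 and requeue everyone onto the mutex futex. */
        atomic_fetch_add(&c->seq, 1);
        futex(&c->seq, FUTEX_REQUEUE, 0, INT_MAX, &m->lock);
}

void mtx_unlock(struct mtx *m)
{
        atomic_store(&m->lock, 0);
        /* The waiters moved over by cv_broadcast show up here, so the
         * unlock pays for the single wake; the thread that wins the
         * mutex then decrements m->waiters, and that decrement is the
         * kind of per-waiter bookkeeping discussed above. */
        if (atomic_load(&m->waiters) > 0)
                futex(&m->lock, FUTEX_WAKE, 1, 0, 0);
}

None of this is musl's actual source; it just shows where the count
"moves" and why the unlock ends up doing the wake.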
The best possible solution would be if we could eliminate the need
for this bookkeeping entirely, but I don't see any easy way to do
that.

Rich