From: Rich Felker
To: musl@lists.openwall.com
Subject: Re: dlopen deadlock
Date: Fri, 15 Jan 2016 14:16:27 -0500
Message-ID: <20160115191627.GG238@brightrain.aerifal.cx>
In-Reply-To: <20160115105818.GJ13558@port70.net>

On Fri, Jan 15, 2016 at 11:58:19AM +0100, Szabolcs Nagy wrote:
> * Rich Felker [2016-01-14 21:47:59 -0500]:
> > On Fri, Jan 15, 2016 at 01:31:49AM +0100, Szabolcs Nagy wrote:
> > > i guess using lockfree atomics could solve the deadlock then
> >
> > I don't think atomics help. We could achieve the same thing as
> > atomics by just taking and releasing the lock on each iteration when
> > modifying the lock-protected state, but not holding the lock while
> > calling the actual ctors.
> >
> > From what I can see/remember, the reason I didn't write the code
> > that way is that we don't want dlopen to return before all ctors
> > have run -- or at least started running, in the case of a recursive
> > call to dlopen. If the lock were taken/released on each iteration,
> > two threads simultaneously calling dlopen on the same library libA
> > that depends on libB could each run A's ctors and B's ctors, and
> > either of them could return from dlopen before the other finished,
> > resulting in library code running without its ctors having finished.
>
> ok then use cond vars:
>
>     for (; p; p=p->next) {
>         lock(m);
>         if (p->ctors_started) {
>             while (!p->ctors_done)
>                 cond_wait(p->cond, m);
>             unlock(m);
>             continue;
>         }
>         p->ctors_started = 1;
>         // update fini_head...
>         unlock(m);
>
>         run_ctors(p); // no global lock is held
>
>         lock(m);
>         p->ctors_done = 1;
>         cond_broadcast(p->cond);
>         unlock(m);
>     }

I agree that cond vars solve the problem. (A semaphore per module
would probably also solve it, but a cond var shared between all libs
would waste less storage, at the expense of extremely rare spurious
wakes.) However, the above pseudo-code is probably not quite right:
only dependencies should be waited on, and there are probably
ordering issues to consider.
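For concreteness, a minimal compilable sketch of that loop with real
pthreads primitives and the shared-cond-var variant; struct dso, its
fields, and run_ctors are hypothetical stand-ins for the real ldso
internals, and dependency-order waiting is still not addressed:

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical stand-in for the ldso module record; only the
       fields needed for ctor synchronization are shown. */
    struct dso {
        struct dso *next;
        int ctors_started;
        int ctors_done;
        void (**ctors)(void);
        size_t nctors;
    };

    /* One lock and one cond var shared by all libs, per the
       storage-saving variant above; a waiter woken for an unrelated
       lib just re-checks its flag and sleeps again. */
    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t init_cond = PTHREAD_COND_INITIALIZER;

    static void run_ctors(struct dso *p)
    {
        for (size_t i = 0; i < p->nctors; i++) p->ctors[i]();
    }

    static void do_init(struct dso *p)
    {
        for (; p; p = p->next) {
            pthread_mutex_lock(&init_lock);
            if (p->ctors_started) {
                /* Another thread is constructing p; rather than
                   returning early, wait until its ctors finish. */
                while (!p->ctors_done)
                    pthread_cond_wait(&init_cond, &init_lock);
                pthread_mutex_unlock(&init_lock);
                continue;
            }
            p->ctors_started = 1;
            pthread_mutex_unlock(&init_lock);

            run_ctors(p); /* no global lock held while ctors run */

            pthread_mutex_lock(&init_lock);
            p->ctors_done = 1;
            pthread_cond_broadcast(&init_cond);
            pthread_mutex_unlock(&init_lock);
        }
    }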
But yes, this is the right basic idea for what we should do.

> > The problem is that excluding multiple threads from running
> > possibly-unrelated ctors simultaneously is wrong, and marking a
> > library constructed as soon as its ctors start is also wrong (at
> > least once this big-hammer lock is fixed). Instead we should be
> > doing some sort of proper dependency-graph tracking and ensuring
> > that a dlopen cannot return until all dependencies have completed
> > their ctors,
>
> this works with cond vars.

OK.

> > except in the special case of recursion, in which case it's
> > acceptable for libX's ctors to load a libY that depends on libX,
> > where libX should be treated as "already constructed" (it's a bug
> > in libX if it has not already completed any initialization that
> > libY might depend
>
> this does not work.

I think it can be made to work, at least for recursive calls within
the same thread (as opposed to the evil "pthread_create from ctor"
issue), just by tracking the tid of the thread that's "in progress"
running a ctor and refraining from waiting on it when it's the
current thread.

> > on). However I don't see any reasonable way to track this kind of
> > relationship when it happens 'indirectly-recursively' via a new
> > thread. It may just be that such a case should deadlock. However,
>
> if the dso currently being constructed is stored in tls
> and this info is passed down to every new thread then
> dlopen could skip this dso.
>
> of course there might be several layers of recursive
> dlopen calls so it is a list of dsos under construction
> so like
>
>     self = pthread_self();
>     for (; p; p=p->next) {
>         if (contains(self->constructing, p))
>             continue;
>         ...
>         old = p->next_constructing = self->constructing;
>         self->constructing = p;
>         run_ctors(p);
>         self->constructing = old;
>         ...
>     }
>
> and self->constructing needs to be set up in pthread_create.
>
> bit overkill for a corner case that never happens
> but it seems possible to do.

Agreed. And I suspect it introduces more broken corner cases than it
fixes...

> > dlopen of separate libs which are unrelated in any dependency sense
> > to the caller should _not_ deadlock just because it happens from a
> > thread created by a ctor...
>
> i think this works with the simple cond vars approach.

Agreed.

Rich
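For reference, a compilable sketch of the per-thread "list of dsos
under construction" scheme quoted above. The locking and cond-var
machinery from the earlier sketch is elided, as in the "..." in the
quoted pseudo-code; struct dso and run_ctors are the same
hypothetical stand-ins, and a C11 _Thread_local variable stands in
for the field musl would keep in its thread descriptor and copy into
new threads in pthread_create:

    #include <stddef.h>

    /* Same hypothetical module record as before, plus a link for
       the per-thread construction stack. */
    struct dso {
        struct dso *next;              /* init-order list */
        struct dso *next_constructing; /* per-thread stack link */
        int ctors_done;
        void (**ctors)(void);
        size_t nctors;
    };

    /* Stand-in for a thread-descriptor field: the head of this
       thread's stack of dsos whose ctors are currently running.
       pthread_create would have to propagate it so threads spawned
       from ctors inherit the list. */
    static _Thread_local struct dso *constructing;

    static int under_construction(const struct dso *p)
    {
        for (const struct dso *q = constructing; q; q = q->next_constructing)
            if (q == p) return 1;
        return 0;
    }

    static void run_ctors(struct dso *p)
    {
        for (size_t i = 0; i < p->nctors; i++) p->ctors[i]();
    }

    static void do_init(struct dso *p)
    {
        for (; p; p = p->next) {
            /* Skip dsos this thread is already constructing, so a
               recursive dlopen doesn't deadlock on itself. */
            if (p->ctors_done || under_construction(p))
                continue;

            /* Push p onto this thread's construction stack, run its
               ctors (which may recurse back into dlopen and hence
               into this loop), then pop it. */
            struct dso *old = p->next_constructing = constructing;
            constructing = p;
            run_ctors(p);
            constructing = old;
            p->ctors_done = 1;
        }
    }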