mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: dlopen deadlock
Date: Fri, 15 Jan 2016 14:16:27 -0500
Message-ID: <20160115191627.GG238@brightrain.aerifal.cx>
In-Reply-To: <20160115105818.GJ13558@port70.net>

On Fri, Jan 15, 2016 at 11:58:19AM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2016-01-14 21:47:59 -0500]:
> > On Fri, Jan 15, 2016 at 01:31:49AM +0100, Szabolcs Nagy wrote:
> > > i guess using lockfree atomics could solve the deadlock then
> > 
> > I don't think atomics help. We could achieve the same thing as atomics
> > by just taking and releasing the lock on each iteration when modifying
> > the lock-protected state, but not holding the lock while calling the
> > actual ctors.
> > 
> > From what I can see/remember, the reason I didn't write the code that
> > way is that we don't want dlopen to return before all ctors have run
> > -- or at least started running, in the case of a recursive call to
> > dlopen. If the lock were taken/released on each iteration, two threads
> > simultaneously calling dlopen on the same library libA that depends on
> > libB could each run A's ctors and B's ctors and either of them could
> > return from dlopen before the other finished, resulting in library
> > code running without its ctors having finished.
> > 
> 
> ok then use cond vars:
> 
> for (; p; p=p->next) {
> 	lock(m);
> 	if (p->ctors_started) {
> 		while (!p->ctors_done)
> 			cond_wait(p->cond, m);
> 		unlock(m);
> 		continue;
> 	}
> 	p->ctors_started = 1;
> 	// update fini_head...
> 	unlock(m);
> 
> 	run_ctors(p); // no global lock is held
> 
> 	lock(m);
> 	p->ctors_done = 1;
> 	cond_broadcast(p->cond);
> 	unlock(m);
> }

I agree that cond vars solve the problem. (A semaphore per module
would probably also solve it, but a single cond var shared between all
libs would waste less storage, at the expense of extremely rare
spurious wakes.) However, the above pseudo-code is probably not quite
right: only dependencies should be waited on, and there are probably
ordering issues to consider. But yes, this is the right basic idea for
what we should do.
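
Roughly, I'm picturing something like the following sketch (struct dso,
ctor_lock, ctor_cond and run_ctors are all made-up stand-ins here, not
the actual dynlink.c internals), which still just walks the flat list
and so ignores the dependency-ordering caveat above:

#include <pthread.h>

struct dso {
	struct dso *next;
	int ctors_started;
	int ctors_done;
};

static pthread_mutex_t ctor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ctor_cond = PTHREAD_COND_INITIALIZER;

/* Placeholder: a real loader would call p's init functions here. */
static void run_ctors(struct dso *p) { (void)p; }

static void do_init(struct dso *head)
{
	struct dso *p;
	for (p = head; p; p = p->next) {
		pthread_mutex_lock(&ctor_lock);
		if (p->ctors_started) {
			/* Another thread owns this module's ctors; wait for
			 * it.  Sharing one cond var across all modules means
			 * waits here can wake spuriously when unrelated
			 * modules finish. */
			while (!p->ctors_done)
				pthread_cond_wait(&ctor_cond, &ctor_lock);
			pthread_mutex_unlock(&ctor_lock);
			continue;
		}
		p->ctors_started = 1;
		pthread_mutex_unlock(&ctor_lock);

		run_ctors(p);	/* no global lock held while ctors run */

		pthread_mutex_lock(&ctor_lock);
		p->ctors_done = 1;
		pthread_cond_broadcast(&ctor_cond);
		pthread_mutex_unlock(&ctor_lock);
	}
}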

> > The problem is that excluding multiple threads from running
> > possibly-unrelated ctors simultaneously is wrong, and marking a
> > library constructed as soon as its ctors start is also wrong (at least
> > once this big-hammer lock is fixed). Instead we should be doing some
> > sort of proper dependency-graph tracking and ensuring that a dlopen
> > cannot return until all dependencies have completed their ctors,
> 
> this works with cond vars.

OK.

> > except in the special case of recursion, in which case it's acceptable
> > for libX's ctors to load a libY that depends on libX, where libX
> > should be treated as "already constructed" (it's a bug in libX if it
> > has not already completed any initialization that libY might depend
> 
> this does not work.

I think it can be made to work, at least for recursive calls within
the same thread (as opposed to the evil "pthread_create from ctor"
issue), just by tracking the tid of the thread that's "in progress"
running a ctor and refraining from waiting on it when that thread is
the current one.
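
For the single-thread recursion case, the check could be as simple as
comparing thread identities (again with made-up names; this extends
the hypothetical struct from the sketch above):

#include <pthread.h>

/* Same hypothetical struct as above, extended with the identity of the
 * thread that claimed the ctors. */
struct dso {
	struct dso *next;
	int ctors_started;
	int ctors_done;
	pthread_t ctor_thread;	/* set together with ctors_started,
				   under the global ctor lock */
};

/* Called with the global ctor lock held, in the branch where
 * p->ctors_started is already set.  Returns nonzero if the caller must
 * wait for p's ctors to finish; zero if p can be treated as already
 * constructed, either because its ctors are done or because they are
 * running in this very thread (a recursive dlopen from one of p's own
 * ctors). */
static int must_wait_for(struct dso *p)
{
	if (p->ctors_done)
		return 0;
	if (pthread_equal(p->ctor_thread, pthread_self()))
		return 0;
	return 1;
}

The thread that claims a module would set p->ctor_thread alongside
p->ctors_started, and the wait loop above would consult must_wait_for()
instead of looking at p->ctors_done alone. This only covers recursion
within one thread, not the pthread_create-from-ctor case below.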

> > on). However I don't see any reasonable way to track this kind of
> > relationship when it happens 'indirectly-recursively' via a new
> > thread. It may just be that such a case should deadlock. However,
> 
> if the dso currently being constructed is stored in tls
> and this info is passed down to every new thread then
> dlopen could skip this dso.
> 
> of course there might be several layers of recursive
> dlopen calls so it is a list of dsos under construction
> so like
> 
> self = pthread_self();
> for (; p; p=p->next) {
> 	if (contains(self->constructing, p))
> 		continue;
> 	...
> 	old = p->next_constructing = self->constructing;
> 	self->constructing = p;
> 	run_ctors(p);
> 	self->constructing = old;
> 	...
> }
> 
> and self->constructing needs to be set up in pthread_create.
> 
> bit overkill for a corner case that never happens
> but it seems possible to do.

Agreed. And I suspect it introduces more broken corner cases than it
fixes...
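
For concreteness, what that amounts to is roughly the following (purely
illustrative: the constructing thread-local, start_args trampoline and
wrapper name are all made up, and the real version would have to be
wired into pthread_create itself rather than layered on top of it):

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>

struct dso;	/* as in the sketches above */

/* Per-thread list of dsos whose ctors are in progress in this thread. */
static _Thread_local struct dso *constructing;

struct start_args {
	void *(*fn)(void *);
	void *arg;
	struct dso *inherited;	/* parent thread's in-progress list */
};

static void *start_trampoline(void *p)
{
	struct start_args a = *(struct start_args *)p;
	free(p);
	constructing = a.inherited;	/* new thread skips these dsos too */
	return a.fn(a.arg);
}

/* Illustrative wrapper only: the real thing would copy the creator's
 * list into the new thread's descriptor inside pthread_create. */
int create_inheriting_ctor_state(pthread_t *td, void *(*fn)(void *), void *arg)
{
	struct start_args *a = malloc(sizeof *a);
	if (!a) return ENOMEM;
	a->fn = fn;
	a->arg = arg;
	a->inherited = constructing;
	int r = pthread_create(td, 0, start_trampoline, a);
	if (r) free(a);
	return r;
}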

> > dlopen of separate libs which are unrelated in any dependency sense to
> > the caller should _not_ deadlock just because it happens from a thread
> > created by a ctor...
> 
> i think this works with the simple cond vars approach.

Agreed.

Rich

