mailing list of musl libc
* dlopen deadlock
@ 2016-01-13 11:09 Szabolcs Nagy
  2016-01-14 22:41 ` Rich Felker
  0 siblings, 1 reply; 8+ messages in thread
From: Szabolcs Nagy @ 2016-01-13 11:09 UTC (permalink / raw)
  To: musl

This bug i reported against glibc also affects musl:
https://sourceware.org/bugzilla/show_bug.cgi?id=19448

in case of musl it's not the global load lock, but the
init_fini_lock that causes the problem.

the multi-threadedness detection is also problematic in
do_init_fini:

	need_locking = has_threads
	if (need_locking)
		lock(init_fini_lock)
	for all deps
		run_ctors(dep)
		if (!need_locking && has_threads)
			need_locking = 1
			lock(init_fini_lock)
	if (need_locking)
		unlock(init_fini_lock)

checking for threads after ctors are run is too late if
the ctors may start new threads that can dlopen libs with
common deps with the currently loaded lib.

one solution i can think of is to have an init_fini_lock
for each dso, then the deadlock only happens if a ctor
tries to dlopen its own lib (directly or indirectly)
which is nonsense (the library depends on itself being
loaded)

(in glibc, with a self-loading ctor in the same thread, the
second dlopen succeeds even though the ctors haven't finished
running; not sure if that's better than a deadlock. in any case,
running user code from libc is problematic; ctors vs dlopen
seems to be a disaster)
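
for concreteness, a minimal sketch of the musl case (hypothetical
library names; it assumes the process already has another thread
running when libA.so is loaded, so the dynamic linker holds
init_fini_lock across libA's ctor):

	/* libA.c -- build as a shared library, roughly:
	 *   cc -fPIC -shared -o libA.so libA.c -lpthread
	 * libB.so stands in for any other library */
	#include <pthread.h>
	#include <dlfcn.h>

	static void *worker(void *arg)
	{
		/* blocks: init_fini_lock is held while libA's ctor runs */
		void *h = dlopen("libB.so", RTLD_NOW);
		if (h) dlclose(h);
		return 0;
	}

	__attribute__((constructor))
	static void libA_ctor(void)
	{
		pthread_t t;
		if (pthread_create(&t, 0, worker, 0))
			return;
		/* never returns: worker's dlopen waits for the lock that
		 * our caller (the dlopen loading libA) is holding */
		pthread_join(t, 0);
	}

dlopening libA.so from an already multi-threaded program then
hangs inside dlopen.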



* Re: dlopen deadlock
  2016-01-13 11:09 dlopen deadlock Szabolcs Nagy
@ 2016-01-14 22:41 ` Rich Felker
  2016-01-15  0:31   ` Szabolcs Nagy
  0 siblings, 1 reply; 8+ messages in thread
From: Rich Felker @ 2016-01-14 22:41 UTC (permalink / raw)
  To: musl

On Wed, Jan 13, 2016 at 12:09:37PM +0100, Szabolcs Nagy wrote:
> This bug i reported against glibc also affects musl:
> https://sourceware.org/bugzilla/show_bug.cgi?id=19448
> 
> in case of musl it's not the global load lock, but the
> init_fini_lock that causes the problem.

The deadlock happens when a ctor makes a thread that calls dlopen and
does not return until the new thread's dlopen returns, right?

> the multi-threadedness detection is also problematic in
> do_init_fini:
> 
> 	need_locking = has_threads
> 	if (need_locking)
> 		lock(init_fini_lock)
> 	for all deps
> 		run_ctors(dep)
> 		if (!need_locking && has_threads)
> 			need_locking = 1
> 			lock(init_fini_lock)
> 	if (need_locking)
> 		unlock(init_fini_lock)
> 
> checking for threads after ctors are run is too late if
> the ctors may start new threads that can dlopen libs with
> common deps with the currently loaded lib.

The logic seems unnecessary now that there's no lazy/optional thread
pointer initialization (originally it was a problem because
pthread_mutex_lock with a recursive mutex needed to access TLS for the
owner tid, but TLS might not have been initialized when the ctors ran)
but I don't immediately see how it's harmful. The only state the lock
protects is p->constructed and the fini chain (fini_head,
p->fini_next) which are all used before the ctors run. The need for
locking is re-evaluated after the ctors run.

> one solution i can think of is to have an init_fini_lock
> for each dso, then the deadlock only happens if a ctor
> tries to dlopen its own lib (directly or indirectly)
> which is nonsense (the library depends on itself being
> loaded)

The lock has to protect the fini chain linked list (used to control
order of dtors) so I don't think having it be per-dso is a
possibility.
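
For reference, the state in question looks roughly like this (same
pseudo-code style as above; not the verbatim musl source):

	for all deps p
		if (p->constructed)
			continue
		p->constructed = 1
		if (p has dtors)
			p->fini_next = fini_head  // global chain shared by all dsos
			fini_head = p             // dtors later run from here, newest first
		run_ctors(p)

A per-dso lock could protect that dso's own flags, but not the shared
fini_head list, which is why some common lock (or other global
synchronization) is needed around those updates.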

Rich



* Re: dlopen deadlock
  2016-01-14 22:41 ` Rich Felker
@ 2016-01-15  0:31   ` Szabolcs Nagy
  2016-01-15  2:47     ` Rich Felker
  2016-03-23 10:55     ` Szabolcs Nagy
  0 siblings, 2 replies; 8+ messages in thread
From: Szabolcs Nagy @ 2016-01-15  0:31 UTC (permalink / raw)
  To: musl

* Rich Felker <dalias@libc.org> [2016-01-14 17:41:15 -0500]:
> On Wed, Jan 13, 2016 at 12:09:37PM +0100, Szabolcs Nagy wrote:
> > This bug i reported against glibc also affects musl:
> > https://sourceware.org/bugzilla/show_bug.cgi?id=19448
> > 
> > in case of musl it's not the global load lock, but the
> > init_fini_lock that causes the problem.
> 
> The deadlock happens when a ctor makes a thread that calls dlopen and
> does not return until the new thread's dlopen returns, right?
> 

yes
(not a common scenario)

> > the multi-threadedness detection is also problematic in
> > do_init_fini:
> > 
> > 	need_locking = has_threads
> > 	if (need_locking)
> > 		lock(init_fini_lock)
> > 	for all deps
> > 		run_ctors(dep)
> > 		if (!need_locking && has_threads)
> > 			need_locking = 1
> > 			lock(init_fini_lock)
> > 	if (need_locking)
> > 		unlock(init_fini_lock)
> > 
> > checking for threads after ctors are run is too late if
> > the ctors may start new threads that can dlopen libs with
> > common deps with the currently loaded lib.
> 
> The logic seems unnecessary now that there's no lazy/optional thread
> pointer initialization (originally it was a problem because
> pthread_mutex_lock with a recursive mutex needed to access TLS for the
> owner tid, but TLS might not have been initialized when the ctors ran)
> but I don't immediately see how it's harmful. The only state the lock
> protects is p->constructed and the fini chain (fini_head,
> p->fini_next) which are all used before the ctors run. The need for
> locking is re-evaluated after the ctors run.
> 

hm ok
i thought the ctors of the same lib might end up being
called twice, concurrently, but i see p->constructed
protects against that

> > one solution i can think of is to have an init_fini_lock
> > for each dso, then the deadlock only happens if a ctor
> > tries to dlopen its own lib (directly or indirectly)
> > which is nonsense (the library depends on itself being
> > loaded)
> 
> The lock has to protect the fini chain linked list (used to control
> order of dtors) so I don't think having it be per-dso is a
> possibility.
> 

i guess using lockfree atomics could solve the deadlock then

> Rich



* Re: dlopen deadlock
  2016-01-15  0:31   ` Szabolcs Nagy
@ 2016-01-15  2:47     ` Rich Felker
  2016-01-15  4:59       ` Rich Felker
  2016-01-15 10:58       ` Szabolcs Nagy
  2016-03-23 10:55     ` Szabolcs Nagy
  1 sibling, 2 replies; 8+ messages in thread
From: Rich Felker @ 2016-01-15  2:47 UTC (permalink / raw)
  To: musl

On Fri, Jan 15, 2016 at 01:31:49AM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2016-01-14 17:41:15 -0500]:
> > On Wed, Jan 13, 2016 at 12:09:37PM +0100, Szabolcs Nagy wrote:
> > > This bug i reported against glibc also affects musl:
> > > https://sourceware.org/bugzilla/show_bug.cgi?id=19448
> > > 
> > > in case of musl it's not the global load lock, but the
> > > init_fini_lock that causes the problem.
> > 
> > The deadlock happens when a ctor makes a thread that calls dlopen and
> > does not return until the new thread's dlopen returns, right?
> > 
> 
> yes
> (not a common scenario)
> 
> > > the multi-threadedness detection is also problematic in
> > > do_init_fini:
> > > 
> > > 	need_locking = has_threads
> > > 	if (need_locking)
> > > 		lock(init_fini_lock)
> > > 	for all deps
> > > 		run_ctors(dep)
> > > 		if (!need_locking && has_threads)
> > > 			need_locking = 1
> > > 			lock(init_fini_lock)
> > > 	if (need_locking)
> > > 		unlock(init_fini_lock)
> > > 
> > > checking for threads after ctors are run is too late if
> > > the ctors may start new threads that can dlopen libs with
> > > common deps with the currently loaded lib.
> > 
> > The logic seems unnecessary now that there's no lazy/optional thread
> > pointer initialization (originally it was a problem because
> > pthread_mutex_lock with a recursive mutex needed to access TLS for the
> > owner tid, but TLS might not have been initialized when the ctors ran)
> > but I don't immediately see how it's harmful. The only state the lock
> > protects is p->constructed and the fini chain (fini_head,
> > p->fini_next) which are all used before the ctors run. The need for
> > locking is re-evaluated after the ctors run.
> > 
> 
> hm ok
> i thought the ctors of the same lib might end up being
> called twice, concurrently, but i see p->constructed
> protects against that
> 
> > > one solution i can think of is to have an init_fini_lock
> > > for each dso, then the deadlock only happens if a ctor
> > > tries to dlopen its own lib (directly or indirectly)
> > > which is nonsense (the library depends on itself being
> > > loaded)
> > 
> > The lock has to protect the fini chain linked list (used to control
> > order of dtors) so I don't think having it be per-dso is a
> > possibility.
> > 
> 
> i guess using lockfree atomics could solve the deadlock then

I don't think atomics help. We could achieve the same thing as atomics
by just taking and releasing the lock on each iteration when modifying
the lock-protected state, but not holding the lock while calling the
actual ctors.

From what I can see/remember, the reason I didn't write the code that
way is that we don't want dlopen to return before all ctors have run
-- or at least started running, in the case of a recursive call to
dlopen. If the lock were taken/released on each iteration, two threads
simultaneously calling dlopen on the same library libA that depends on
libB could each run A's ctors and B's ctors and either of them could
return from dlopen before the other finished, resulting in library
code running without its ctors having finished.
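
Concretely, the per-iteration locking (which is all that atomics
would buy us) would look something like this:

	for all deps p
		lock(init_fini_lock)
		if (p->constructed)
			unlock(init_fini_lock)
			continue          // p's ctors may still be running in another thread
		p->constructed = 1
		// update fini_head / p->fini_next
		unlock(init_fini_lock)
		run_ctors(p)              // unserialized; a second dlopen won't wait for this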

The problem is that excluding multiple threads from running
possibly-unrelated ctors simultaneously is wrong, and marking a
library constructed as soon as its ctors start is also wrong (at least
once this big-hammer lock is fixed). Instead we should be doing some
sort of proper dependency-graph tracking and ensuring that a dlopen
cannot return until all dependencies have completed their ctors,
except in the special case of recursion, in which case it's acceptable
for libX's ctors to load a libY that depends on libX, where libX
should be treated as "already constructed" (it's a bug in libX if it
has not already completed any initialization that libY might depend
on). However I don't see any reasonable way to track this kind of
relationship when it happens 'indirectly-recursively' via a new
thread. It may just be that such a case should deadlock. However,
dlopen of separate libs which are unrelated in any dependency sense to
the caller should _not_ deadlock just because it happens from a thread
created by a ctor...

Rich



* Re: dlopen deadlock
  2016-01-15  2:47     ` Rich Felker
@ 2016-01-15  4:59       ` Rich Felker
  2016-01-15 10:58       ` Szabolcs Nagy
  1 sibling, 0 replies; 8+ messages in thread
From: Rich Felker @ 2016-01-15  4:59 UTC (permalink / raw)
  To: musl

On Thu, Jan 14, 2016 at 09:47:59PM -0500, Rich Felker wrote:
> > > > one solution i can think of is to have an init_fini_lock
> > > > for each dso, then the deadlock only happens if a ctor
> > > > tries to dlopen its own lib (directly or indirectly)
> > > > which is nonsense (the library depends on itself being
> > > > loaded)
> > > 
> > > The lock has to protect the fini chain linked list (used to control
> > > order of dtors) so I don't think having it be per-dso is a
> > > possibility.
> > > 
> > 
> > i guess using lockfree atomics could solve the deadlock then
> 
> I don't think atomics help. We could achieve the same thing as atomics
> by just taking and releasing the lock on each iteration when modifying
> the lock-protected state, but not holding the lock while calling the
> actual ctors.
> 
> From what I can see/remember, the reason I didn't write the code that
> way is that we don't want dlopen to return before all ctors have run
> -- or at least started running, in the case of a recursive call to
> dlopen. If the lock were taken/released on each iteration, two threads
> simultaneously calling dlopen on the same library libA that depends on
> libB could each run A's ctors and B's ctors and either of them could
> return from dlopen before the other finished, resulting in library
> code running without its ctors having finished.
> 
> The problem is that excluding multiple threads from running
> possibly-unrelated ctors simultaneously is wrong, and marking a
> library constructed as soon as its ctors start is also wrong (at least
> once this big-hammer lock is fixed). Instead we should be doing some
> sort of proper dependency-graph tracking and ensuring that a dlopen
> cannot return until all dependencies have completed their ctors,
> except in the special case of recursion, in which case it's acceptable
> for libX's ctors to load a libY that depends on libX, where libX
> should be treated as "already constructed" (it's a bug in libX if it
> has not already completed any initialization that libY might depend
> on). However I don't see any reasonable way to track this kind of
> relationship when it happens 'indirectly-recursively' via a new
> thread. It may just be that such a case should deadlock. However,
> dlopen of separate libs which are unrelated in any dependency sense to
> the caller should _not_ deadlock just because it happens from a thread
> created by a ctor...

Some relevant history:

commit f4f77c068f1058d202a976678fce2617d59c0ff6
fix/improve shared library ctor/dtor handling, allow recursive dlopen

commit 509b50eda8ea7d4a28f738e4cf8ea98d25959f00
fix missing synchronization in calls from dynamic linker to global ctors

Rich



* Re: dlopen deadlock
  2016-01-15  2:47     ` Rich Felker
  2016-01-15  4:59       ` Rich Felker
@ 2016-01-15 10:58       ` Szabolcs Nagy
  2016-01-15 19:16         ` Rich Felker
  1 sibling, 1 reply; 8+ messages in thread
From: Szabolcs Nagy @ 2016-01-15 10:58 UTC (permalink / raw)
  To: musl

* Rich Felker <dalias@libc.org> [2016-01-14 21:47:59 -0500]:
> On Fri, Jan 15, 2016 at 01:31:49AM +0100, Szabolcs Nagy wrote:
> > i guess using lockfree atomics could solve the deadlock then
> 
> I don't think atomics help. We could achieve the same thing as atomics
> by just taking and releasing the lock on each iteration when modifying
> the lock-protected state, but not holding the lock while calling the
> actual ctors.
> 
> From what I can see/remember, the reason I didn't write the code that
> way is that we don't want dlopen to return before all ctors have run
> -- or at least started running, in the case of a recursive call to
> dlopen. If the lock were taken/released on each iteration, two threads
> simultaneously calling dlopen on the same library libA that depends on
> libB could each run A's ctors and B's ctors and either of them could
> return from dlopen before the other finished, resulting in library
> code running without its ctors having finished.
> 

ok then use cond vars:

for (; p; p=p->next) {
	lock(m);
	if (p->ctors_started) {
		while (!p->ctors_done)
			cond_wait(p->cond, m);
		unlock(m);
		continue;
	}
	p->ctors_started = 1;
	// update fini_head...
	unlock(m);

	run_ctors(p); // no global lock is held

	lock(m);
	p->ctors_done = 1;
	cond_broadcast(p->cond);
	unlock(m);
}

> The problem is that excluding multiple threads from running
> possibly-unrelated ctors simultaneously is wrong, and marking a
> library constructed as soon as its ctors start is also wrong (at least
> once this big-hammer lock is fixed). Instead we should be doing some
> sort of proper dependency-graph tracking and ensuring that a dlopen
> cannot return until all dependencies have completed their ctors,

this works with cond vars.

> except in the special case of recursion, in which case it's acceptable
> for libX's ctors to load a libY that depends on libX, where libX
> should be treated as "already constructed" (it's a bug in libX if it
> has not already completed any initialization that libY might depend

this does not work.

> on). However I don't see any reasonable way to track this kind of
> relationship when it happens 'indirectly-recursively' via a new
> thread. It may just be that such a case should deadlock. However,

if the dso currently being constructed is stored in tls
and this info is passed down to every new thread then
dlopen could skip this dso.

of course there might be several layers of recursive
dlopen calls so it is a list of dsos under construction
so like

self = pthread_self();
for (; p; p=p->next) {
	if (contains(self->constructing, p))
		continue;
	...
	old = p->next_constructing = self->constructing;
	self->constructing = p;
	run_ctors(p);
	self->constructing = old;
	...
}

and self->constructing needs to be set up in pthread_create.

bit overkill for a corner case that never happens
but it seems possible to do.

> dlopen of separate libs which are unrelated in any dependency sense to
> the caller should _not_ deadlock just because it happens from a thread
> created by a ctor...

i think this works with the simple cond vars approach.

> 
> Rich



* Re: dlopen deadlock
  2016-01-15 10:58       ` Szabolcs Nagy
@ 2016-01-15 19:16         ` Rich Felker
  0 siblings, 0 replies; 8+ messages in thread
From: Rich Felker @ 2016-01-15 19:16 UTC (permalink / raw)
  To: musl

On Fri, Jan 15, 2016 at 11:58:19AM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2016-01-14 21:47:59 -0500]:
> > On Fri, Jan 15, 2016 at 01:31:49AM +0100, Szabolcs Nagy wrote:
> > > i guess using lockfree atomics could solve the deadlock then
> > 
> > I don't think atomics help. We could achieve the same thing as atomics
> > by just taking and releasing the lock on each iteration when modifying
> > the lock-protected state, but not holding the lock while calling the
> > actual ctors.
> > 
> > From what I can see/remember, the reason I didn't write the code that
> > way is that we don't want dlopen to return before all ctors have run
> > -- or at least started running, in the case of a recursive call to
> > dlopen. If the lock were taken/released on each iteration, two threads
> > simultaneously calling dlopen on the same library libA that depends on
> > libB could each run A's ctors and B's ctors and either of them could
> > return from dlopen before the other finished, resulting in library
> > code running without its ctors having finished.
> > 
> 
> ok then use cond vars:
> 
> for (; p; p=p->next) {
> 	lock(m);
> 	if (p->ctors_started) {
> 		while (!p->ctors_done)
> 			cond_wait(p->cond, m);
> 		unlock(m);
> 		continue;
> 	}
> 	p->ctors_started = 1;
> 	// update fini_head...
> 	unlock(m);
> 
> 	run_ctors(p); // no global lock is held
> 
> 	lock(m);
> 	p->ctors_done = 1;
> 	cond_broadcast(p->cond);
> 	unlock(m);
> }

I agree that cond vars solve the problem. (A semaphore per module
would probably also solve it, but using a cond var that's shared
between all libs would waste less storage at the expense of extremely
rare spurious wakes.) However, the above pseudo-code is probably not
quite right. Only dependencies should be waited on and there are
probably order issues to consider. But yes, this is the right basic
idea for what we should do.
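
As a sketch of the shared-cond-var variant (reusing the field names
from your pseudo-code, which don't exist in musl today; this only
illustrates the wake-up mechanics, not the dependency-ordering issues
above):

	static pthread_mutex_t init_fini_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t ctor_done = PTHREAD_COND_INITIALIZER; /* shared by all dsos */

	for (; p; p=p->next) {
		pthread_mutex_lock(&init_fini_lock);
		if (p->ctors_started) {
			/* may be woken when an unrelated dso finishes; just re-check */
			while (!p->ctors_done)
				pthread_cond_wait(&ctor_done, &init_fini_lock);
			pthread_mutex_unlock(&init_fini_lock);
			continue;
		}
		p->ctors_started = 1;
		/* update fini_head / p->fini_next here, under the lock */
		pthread_mutex_unlock(&init_fini_lock);

		run_ctors(p);		/* no lock held */

		pthread_mutex_lock(&init_fini_lock);
		p->ctors_done = 1;
		pthread_cond_broadcast(&ctor_done);	/* wakes waiters on every dso */
		pthread_mutex_unlock(&init_fini_lock);
	}

The single broadcast cond var is what trades the per-dso storage for
those rare spurious wakes.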

> > The problem is that excluding multiple threads from running
> > possibly-unrelated ctors simultaneously is wrong, and marking a
> > library constructed as soon as its ctors start is also wrong (at least
> > once this big-hammer lock is fixed). Instead we should be doing some
> > sort of proper dependency-graph tracking and ensuring that a dlopen
> > cannot return until all dependencies have completed their ctors,
> 
> this works with cond vars.

OK.

> > except in the special case of recursion, in which case it's acceptable
> > for libX's ctors to load a libY that depends on libX, where libX
> > should be treated as "already constructed" (it's a bug in libX if it
> > has not already completed any initialization that libY might depend
> 
> this does not work.

I think it can be made to work, at least for recursive calls within
the same thread (as opposed to the evil "pthread_create from ctor"
issue) just by tracking the tid of the thread that's "in progress"
running a ctor and refraining from waiting on it when it's the current
thread.
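
Something like this on top of the cond-var loop, where ctor_visitor
is a hypothetical per-dso field recording which thread is running its
ctors:

	for (; p; p=p->next) {
		lock(m);
		if (p->ctors_started) {
			if (!p->ctors_done && p->ctor_visitor == self_tid) {
				/* recursive dlopen from p's own ctor in this
				 * thread: treat p as already constructed
				 * rather than wait on ourselves */
				unlock(m);
				continue;
			}
			while (!p->ctors_done)
				cond_wait(p->cond, m);
			unlock(m);
			continue;
		}
		p->ctors_started = 1;
		p->ctor_visitor = self_tid;	/* recorded before dropping m */
		// update fini_head...
		unlock(m);
		run_ctors(p);
		/* then as before: set ctors_done, broadcast */
	}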

> > on). However I don't see any reasonable way to track this kind of
> > relationship when it happens 'indirectly-recursively' via a new
> > thread. It may just be that such a case should deadlock. However,
> 
> if the dso currently being constructed is stored in tls
> and this info is passed down to every new thread then
> dlopen could skip this dso.
> 
> of course there might be several layers of recursive
> dlopen calls so it is a list of dsos under construction
> so like
> 
> self = pthread_self();
> for (; p; p=p->next) {
> 	if (contains(self->constructing, p))
> 		continue;
> 	...
> 	old = p->next_constructing = self->constructing;
> 	self->constructing = p;
> 	run_ctors(p);
> 	self->constructing = old;
> 	...
> }
> 
> and self->constructing needs to be set up in pthread_create.
> 
> bit overkill for a corner case that never happens
> but it seems possible to do.

Agreed. And I suspect it introduces more broken corner cases than it
fixes...

> > dlopen of separate libs which are unrelated in any dependency sense to
> > the caller should _not_ deadlock just because it happens from a thread
> > created by a ctor...
> 
> i think this works with the simple cond vars approach.

Agreed.

Rich



* Re: dlopen deadlock
  2016-01-15  0:31   ` Szabolcs Nagy
  2016-01-15  2:47     ` Rich Felker
@ 2016-03-23 10:55     ` Szabolcs Nagy
  1 sibling, 0 replies; 8+ messages in thread
From: Szabolcs Nagy @ 2016-03-23 10:55 UTC (permalink / raw)
  To: musl

* Szabolcs Nagy <nsz@port70.net> [2016-01-15 01:31:49 +0100]:
> * Rich Felker <dalias@libc.org> [2016-01-14 17:41:15 -0500]:
> > On Wed, Jan 13, 2016 at 12:09:37PM +0100, Szabolcs Nagy wrote:
> > > This bug i reported against glibc also affects musl:
> > > https://sourceware.org/bugzilla/show_bug.cgi?id=19448
> > > 
> > > in case of musl it's not the global load lock, but the
> > > init_fini_lock that causes the problem.
> > 
> > The deadlock happens when a ctor makes a thread that calls dlopen and
> > does not return until the new thread's dlopen returns, right?
> > 
> 
> yes
> (not a common scenario)

more general deadlock case:
ctor waits for another thread that is in dlopen waiting
for the init_fini_lock.
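
i.e. something like (hypothetical libs; the wait can be a join, a
cond var, an interposed allocator's lock, ...):

	thread A: dlopen("libA.so")
	          takes init_fini_lock, starts running libA's ctor
	          ctor blocks waiting on thread B
	thread B: dlopen("libB.so")
	          blocks on init_fini_lock (held by A)

	A waits for B, B waits for A: deadlock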

this actually happened on glibc, but through interposed
malloc instead of a ctor
https://sourceware.org/ml/libc-alpha/2016-03/msg00594.html

