From: Rich Felker <dalias@libc.org>
To: musl@lists.openwall.com
Subject: Re: [musl] Illegal killlock skipping when transitioning to single-threaded state
Date: Wed, 5 Oct 2022 10:37:32 -0400
Message-ID: <20221005143730.GT29905@brightrain.aerifal.cx>
In-Reply-To: <20221005140303.GS29905@brightrain.aerifal.cx>
[-- Attachment #1: Type: text/plain, Size: 3732 bytes --]
On Wed, Oct 05, 2022 at 10:03:03AM -0400, Rich Felker wrote:
> On Wed, Oct 05, 2022 at 03:10:09PM +0300, Alexey Izbyshev wrote:
> > On 2022-10-05 04:00, Rich Felker wrote:
> > >On Wed, Sep 07, 2022 at 03:46:53AM +0300, Alexey Izbyshev wrote:
> > >>Reordering the "libc.need_locks = -1" assignment and
> > >>UNLOCK(E->killlock) and providing a store barrier between them
> > >>should fix the issue.
> > >
> > >Back to this, because it's immediately actionable without resolving
> > >the aarch64 atomics issue:
> > >
> > >Do you have something in mind for how this reordering is done, since
> > >there are other intervening steps that are potentially ordered with
> > >respect to either or both? I don't think there is actually any
> > >ordering constraint at all on the unlocking of killlock (with the
> > >accompanying assignment self->tid=0 kept with it) except that it be
> > >past the point where we are committed to the thread terminating
> > >without executing any more application code. So my leaning would be to
> > >move this block from the end of pthread_exit up to right after the
> > >point-of-no-return comment.
> > >
> > This was my conclusion as well back when I looked at it before
> > sending the report.
> >
> > I was initially concerned about whether reordering with
> > a_store(&self->detach_state, DT_EXITED) could cause an unwanted
> > observable change (pthread_tryjoin_np() returning EBUSY after a
> > pthread function acting on tid like pthread_getschedparam() returns
> > ESRCH), but no, pthread_tryjoin_np() will block/trap if the thread
> > is not DT_JOINABLE.
> >
> > >Unfortunately while reading this I found another bug, this time a lock
> > >order one. __dl_thread_cleanup() takes a lock while the thread list
> > >lock is already held, but fork takes these in the opposite order. I
> > >think the lock here could be dropped and replaced with an atomic-cas
> > >list head, but that's rather messy and I'm open to other ideas.
> > >
> > I'm not sure why using a lock-free list is messy, it seems like a
> > perfect fit here to me.
>
> Just in general I've tried to reduce the direct use of atomics and use
> high-level primitives, because (as this thread is evidence of) I find
> the reasoning about direct use of atomics and their correctness to be
> difficult and inaccessible to a lot of people who would otherwise be
> successful readers of the code. But you're right that it's a "good
> match" for the problem at hand.
>
> > However, doesn't __dl_vseterr() use the libc-internal allocator
> > after 34952fe5de44a833370cbe87b63fb8eec61466d7? If so, the problem
> > that freebuf_queue was originally solving doesn't exist anymore. We
> > still can't call the allocator after __tl_lock(), but maybe this
> > whole free deferral approach can be reconsidered?
>
> I almost made that change when the MT-fork changes were done, but
> didn't because it was wrong. I'm not sure if I documented this
> anywhere (it might be in mail threads related to that or IRC) but it
> was probably because it would need to take malloc locks with the
> thread list lock held, which isn't allowed.
>
> It would be nice if we could get rid of the deferred freeing here, but
> I don't see a good way. The reason we can't free the buffer until
> after the thread list lock is taken is that it's only freeable if this
> isn't the last exiting thread. If it is the last exiting thread, the
> buffer contents still need to be present for the atexit handlers to
> see. And whether this is the last exiting thread is only
> stable/determinate as long as the thread list lock is held.
Proposed patch with atomic list attached, along with a stupid test
program (to be run under a debugger to see anything happening).
Rich
[-- Attachment #2: dlerror_free.c --]
[-- Type: text/plain, Size: 646 bytes --]
#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>
#include <semaphore.h>
void *thread_func(void *p)
{
	sem_t *sem = p;
	char buf[100];
	sem_wait(&sem[0]);
	snprintf(buf, sizeof buf, "/dev/null/noexist_%p", (void *)buf);
	dlopen(buf, RTLD_NOW|RTLD_LOCAL);
	sem_post(&sem[1]);
	return 0;
}
#define NT 30
int main()
{
	sem_t sem[2];
	int i;
	sem_init(&sem[0], 0, 0);
	sem_init(&sem[1], 0, 0);
	for (i=0; i<NT; i++) {
		pthread_t td;
		pthread_create(&td, 0, thread_func, sem);
	}
	for (i=0; i<NT; i++) sem_post(&sem[0]);
	for (i=0; i<NT; i++) sem_wait(&sem[1]);
	dlopen("/dev/null/noexist_main", RTLD_NOW|RTLD_LOCAL);
	printf("%s\n", dlerror());
}
[-- Attachment #3: dlerror_free.diff --]
[-- Type: text/plain, Size: 2760 bytes --]
diff --git a/src/internal/fork_impl.h b/src/internal/fork_impl.h
index 5892c13b..ae3a79e5 100644
--- a/src/internal/fork_impl.h
+++ b/src/internal/fork_impl.h
@@ -2,7 +2,6 @@
 
 extern hidden volatile int *const __at_quick_exit_lockptr;
 extern hidden volatile int *const __atexit_lockptr;
-extern hidden volatile int *const __dlerror_lockptr;
 extern hidden volatile int *const __gettext_lockptr;
 extern hidden volatile int *const __locale_lockptr;
 extern hidden volatile int *const __random_lockptr;
diff --git a/src/ldso/dlerror.c b/src/ldso/dlerror.c
index afe59253..30a33f33 100644
--- a/src/ldso/dlerror.c
+++ b/src/ldso/dlerror.c
@@ -23,28 +23,31 @@ char *dlerror()
 	return s;
 }
 
-static volatile int freebuf_queue_lock[1];
-static void **freebuf_queue;
-volatile int *const __dlerror_lockptr = freebuf_queue_lock;
+/* Atomic singly-linked list, used to store list of thread-local dlerror
+ * buffers for deferred free. They cannot be freed at thread exit time
+ * because, by the time it's known they can be freed, the exiting thread
+ * is in a highly restrictive context where it cannot call (even the
+ * libc-internal) free. It also can't take locks; thus the atomic list. */
+
+static void *volatile freebuf_queue;
 
 void __dl_thread_cleanup(void)
 {
 	pthread_t self = __pthread_self();
-	if (self->dlerror_buf && self->dlerror_buf != (void *)-1) {
-		LOCK(freebuf_queue_lock);
-		void **p = (void **)self->dlerror_buf;
-		*p = freebuf_queue;
-		freebuf_queue = p;
-		UNLOCK(freebuf_queue_lock);
-	}
+	if (!self->dlerror_buf || self->dlerror_buf == (void *)-1)
+		return;
+	void *h;
+	do {
+		h = freebuf_queue;
+		*(void **)self->dlerror_buf = h;
+	} while (a_cas_p(&freebuf_queue, h, self->dlerror_buf) != h);
 }
 
 hidden void __dl_vseterr(const char *fmt, va_list ap)
 {
-	LOCK(freebuf_queue_lock);
-	void **q = freebuf_queue;
-	freebuf_queue = 0;
-	UNLOCK(freebuf_queue_lock);
+	void **q;
+	do q = freebuf_queue;
+	while (q && a_cas_p(&freebuf_queue, q, 0) != q);
 
 	while (q) {
 		void **p = *q;
diff --git a/src/process/fork.c b/src/process/fork.c
index 54bc2892..ff71845c 100644
--- a/src/process/fork.c
+++ b/src/process/fork.c
@@ -9,7 +9,6 @@ static volatile int *const dummy_lockptr = 0;
 
 weak_alias(dummy_lockptr, __at_quick_exit_lockptr);
 weak_alias(dummy_lockptr, __atexit_lockptr);
-weak_alias(dummy_lockptr, __dlerror_lockptr);
 weak_alias(dummy_lockptr, __gettext_lockptr);
 weak_alias(dummy_lockptr, __locale_lockptr);
 weak_alias(dummy_lockptr, __random_lockptr);
@@ -24,7 +23,6 @@ weak_alias(dummy_lockptr, __vmlock_lockptr);
 static volatile int *const *const atfork_locks[] = {
 	&__at_quick_exit_lockptr,
 	&__atexit_lockptr,
-	&__dlerror_lockptr,
 	&__gettext_lockptr,
 	&__locale_lockptr,
 	&__random_lockptr,