mailing list of musl libc
* [musl] [PATCH] mq_notify: fix close/recv race on failure path
@ 2022-11-09 10:46 Alexey Izbyshev
  2022-12-14  2:26 ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2022-11-09 10:46 UTC (permalink / raw)
  To: musl

In case of failure, mq_notify closes the socket immediately after
sending a cancellation request to the worker thread that is going to
call or has already called recv on that socket. Even if we don't
consider the kernel behavior when the only descriptor to an object that
is being used in a system call is closed, if the socket descriptor is
closed before the kernel looks at it, another thread could open a
descriptor with the same value in the meantime, resulting in recv
acting on the wrong object.

Fix the race by moving the pthread_cancel call before the barrier wait
to guarantee that the cancellation flag is set before the worker thread
enters recv.
---
Other ways to fix this:

* Remove the racing close call from mq_notify and surround recv
  with pthread_cleanup_push/pop.

* Make the worker thread joinable initially, join it before closing
  the socket on the failure path, and detach it on the happy path.
  This would also require disabling cancellation around join/detach
  to ensure that mq_notify itself is not cancelled in an inappropriate
  state.

Alexey
---
 src/mq/mq_notify.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 221591c7..177caccd 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -34,7 +34,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	struct args args = { .sev = sev };
 	pthread_attr_t attr;
 	pthread_t td;
-	int s;
+	int s, r;
 	struct sigevent sev2;
 	static const char zeros[32];
 
@@ -56,18 +56,19 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 		return -1;
 	}
 
-	pthread_barrier_wait(&args.barrier);
-	pthread_barrier_destroy(&args.barrier);
-
 	sev2.sigev_notify = SIGEV_THREAD;
 	sev2.sigev_signo = s;
 	sev2.sigev_value.sival_ptr = (void *)&zeros;
 
-	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
+	r = syscall(SYS_mq_notify, mqd, &sev2);
+	if (r < 0)
 		pthread_cancel(td);
+
+	pthread_barrier_wait(&args.barrier);
+	pthread_barrier_destroy(&args.barrier);
+
+	if (r < 0)
 		__syscall(SYS_close, s);
-		return -1;
-	}
 
-	return 0;
+	return r;
 }
-- 
2.37.2



* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2022-11-09 10:46 [musl] [PATCH] mq_notify: fix close/recv race on failure path Alexey Izbyshev
@ 2022-12-14  2:26 ` Rich Felker
  2022-12-14  6:49   ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2022-12-14  2:26 UTC (permalink / raw)
  To: musl

On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
> In case of failure mq_notify closes the socket immediately after
> sending a cancellation request to the worker thread that is going to
> call or have already called recv on that socket. Even if we don't
> consider the kernel behavior when the only descriptor to an object that
> is being used in a system call is closed, if the socket descriptor is
> closed before the kernel looks at it, another thread could open a
> descriptor with the same value in the meantime, resulting in recv
> acting on a wrong object.
> 
> Fix the race by moving pthread_cancel call before the barrier wait to
> guarantee that the cancellation flag is set before the worker thread
> enters recv.
> ---
> Other ways to fix this:
> 
> * Remove the racing close call from mq_notify and surround recv
>   with pthread_cleanup_push/pop.
> 
> * Make the worker thread joinable initially, join it before closing
>   the socket on the failure path, and detach it on the happy path.
>   This would also require disabling cancellation around join/detach
>   to ensure that mq_notify itself is not cancelled in an inappropriate
>   state.

I'd put this aside for a while because of the pthread barrier
involvement I kinda didn't want to deal with. The fix you have sounds
like it works, but I think I'd rather pursue one of the other
approaches, probably the joinable thread one.

At present, the implementation of barriers seems to be buggy (I need
to dig back up the post about that), and they're also a really
expensive synchronization tool that goes both directions where we
really only need one direction (notifying the caller we're done
consuming the args). I'd rather switch to a semaphore, which is the
lightest and most idiomatic (at least per present-day musl idioms) way
to do this.
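
A minimal sketch of the one-way handoff I mean, with error handling
elided:

struct args { sem_t sem; /* ...argument fields... */ };

/* caller: */
sem_init(&args.sem, 0, 0);
/* ...pthread_create(&td, &attr, start, &args)... */
sem_wait(&args.sem);	/* worker has finished reading args */
sem_destroy(&args.sem);

/* worker, after copying what it needs out of args: */
sem_post(&args->sem);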

Using a joinable thread also lets us ensure we don't leave around
threads that are waiting to be scheduled just to exit on failure
return. Depending on scheduling attributes, this probably could be
bad.

(Also, arguably we should perhaps start the thread with unchanged
scheduling attributes and only change to the caller-provided ones
after successfully making the SYS_mq_notify syscall -- bleh. But
that's a separate topic.)

Rich


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2022-12-14  2:26 ` Rich Felker
@ 2022-12-14  6:49   ` Alexey Izbyshev
  2023-02-10 16:29     ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2022-12-14  6:49 UTC (permalink / raw)
  To: musl

On 2022-12-14 05:26, Rich Felker wrote:
> On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
>> In case of failure mq_notify closes the socket immediately after
>> sending a cancellation request to the worker thread that is going to
>> call or have already called recv on that socket. Even if we don't
>> consider the kernel behavior when the only descriptor to an object 
>> that
>> is being used in a system call is closed, if the socket descriptor is
>> closed before the kernel looks at it, another thread could open a
>> descriptor with the same value in the meantime, resulting in recv
>> acting on a wrong object.
>> 
>> Fix the race by moving pthread_cancel call before the barrier wait to
>> guarantee that the cancellation flag is set before the worker thread
>> enters recv.
>> ---
>> Other ways to fix this:
>> 
>> * Remove the racing close call from mq_notify and surround recv
>>   with pthread_cleanup_push/pop.
>> 
>> * Make the worker thread joinable initially, join it before closing
>>   the socket on the failure path, and detach it on the happy path.
>>   This would also require disabling cancellation around join/detach
>>   to ensure that mq_notify itself is not cancelled in an inappropriate
>>   state.
> 
> I'd put this aside for a while because of the pthread barrier
> involvement I kinda didn't want to deal with. The fix you have sounds
> like it works, but I think I'd rather pursue one of the other
> approaches, probably the joinable thread one.
> 
> At present, the implementation of barriers seems to be buggy (I need
> to dig back up the post about that), and they're also a really
> expensive synchronization tool that goes both directions where we
> really only need one direction (notifying the caller we're done
> consuming the args). I'd rather switch to a semaphore, which is the
> lightest and most idiomatic (at least per present-day musl idioms) way
> to do this.
> 
This sounds good to me. The same approach can also be used in 
timer_create (assuming it's acceptable to add dependency on 
pthread_cancel to that code).

> Using a joinable thread also lets us ensure we don't leave around
> threads that are waiting to be scheduled just to exit on failure
> return. Depending on scheduling attributes, this probably could be
> bad.
> 
I also prefer this approach, though mostly for aesthetic reasons (I 
haven't thought about the scheduling behavior). I didn't use it only 
because I felt it's a "logically larger" change than simply moving the 
pthread_barrier_wait call. And I wasn't aware that barriers are buggy in 
musl.

Thanks,
Alexey


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2022-12-14  6:49   ` Alexey Izbyshev
@ 2023-02-10 16:29     ` Rich Felker
  2023-02-11 14:45       ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-10 16:29 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

[-- Attachment #1: Type: text/plain, Size: 2780 bytes --]

On Wed, Dec 14, 2022 at 09:49:26AM +0300, Alexey Izbyshev wrote:
> On 2022-12-14 05:26, Rich Felker wrote:
> >On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
> >>In case of failure mq_notify closes the socket immediately after
> >>sending a cancellation request to the worker thread that is going to
> >>call or have already called recv on that socket. Even if we don't
> >>consider the kernel behavior when the only descriptor to an
> >>object that
> >>is being used in a system call is closed, if the socket descriptor is
> >>closed before the kernel looks at it, another thread could open a
> >>descriptor with the same value in the meantime, resulting in recv
> >>acting on a wrong object.
> >>
> >>Fix the race by moving pthread_cancel call before the barrier wait to
> >>guarantee that the cancellation flag is set before the worker thread
> >>enters recv.
> >>---
> >>Other ways to fix this:
> >>
> >>* Remove the racing close call from mq_notify and surround recv
> >>  with pthread_cleanup_push/pop.
> >>
> >>* Make the worker thread joinable initially, join it before closing
> >>  the socket on the failure path, and detach it on the happy path.
> >>  This would also require disabling cancellation around join/detach
> >>  to ensure that mq_notify itself is not cancelled in an inappropriate
> >>  state.
> >
> >I'd put this aside for a while because of the pthread barrier
> >involvement I kinda didn't want to deal with. The fix you have sounds
> >like it works, but I think I'd rather pursue one of the other
> >approaches, probably the joinable thread one.
> >
> >At present, the implementation of barriers seems to be buggy (I need
> >to dig back up the post about that), and they're also a really
> >expensive synchronization tool that goes both directions where we
> >really only need one direction (notifying the caller we're done
> >consuming the args). I'd rather switch to a semaphore, which is the
> >lightest and most idiomatic (at least per present-day musl idioms) way
> >to do this.
> >
> This sounds good to me. The same approach can also be used in
> timer_create (assuming it's acceptable to add dependency on
> pthread_cancel to that code).
> 
> >Using a joinable thread also lets us ensure we don't leave around
> >threads that are waiting to be scheduled just to exit on failure
> >return. Depending on scheduling attributes, this probably could be
> >bad.
> >
> I also prefer this approach, though mostly for aesthetic reasons (I
> haven't thought about the scheduling behavior). I didn't use it only
> because I felt it's a "logically larger" change than simply moving
> the pthread_barrier_wait call. And I wasn't aware that barriers are
> buggy in musl.

Finally following up on this. How do the attached commits look?

Rich

[-- Attachment #2: 0001-mq_notify-use-semaphore-instead-of-barrier-to-sync-a.patch --]
[-- Type: text/plain, Size: 1830 bytes --]

From 047594edd9c72c7b9b7e4f2b32d561a4b6555c2f Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:17:02 -0500
Subject: [PATCH 1/3] mq_notify: use semaphore instead of barrier to sync args
 consumption

semaphores are a much lighter primitive, and more idiomatic with
current usage in the code base.
---
 src/mq/mq_notify.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 221591c7..7d3dff5f 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -4,10 +4,11 @@
 #include <sys/socket.h>
 #include <signal.h>
 #include <unistd.h>
+#include <semaphore.h>
 #include "syscall.h"
 
 struct args {
-	pthread_barrier_t barrier;
+	sem_t sem;
 	int sock;
 	const struct sigevent *sev;
 };
@@ -21,7 +22,7 @@ static void *start(void *p)
 	void (*func)(union sigval) = args->sev->sigev_notify_function;
 	union sigval val = args->sev->sigev_value;
 
-	pthread_barrier_wait(&args->barrier);
+	sem_post(&args->sem);
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
 	close(s);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
@@ -48,7 +49,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
 	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
-	pthread_barrier_init(&args.barrier, 0, 2);
+	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
 		__syscall(SYS_close, s);
@@ -56,8 +57,8 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 		return -1;
 	}
 
-	pthread_barrier_wait(&args.barrier);
-	pthread_barrier_destroy(&args.barrier);
+	sem_wait(&args.sem);
+	sem_destroy(&args.sem);
 
 	sev2.sigev_notify = SIGEV_THREAD;
 	sev2.sigev_signo = s;
-- 
2.21.0


[-- Attachment #3: 0002-mq_notify-fix-use-after-close-double-close-bug-in-er.patch --]
[-- Type: text/plain, Size: 1596 bytes --]

From 70dea58630dedb26b2fbf03422ad7a36f8e95c92 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:22:45 -0500
Subject: [PATCH 2/3] mq_notify: fix use-after-close/double-close bug in error
 path

in the error path where the mq_notify syscall fails, the initiating
thread may close the socket before the worker thread calls recv on it.
even if this does not happen, both threads perform the close
operation, resulting in double-close.

move the close to a cancellation cleanup handler so this cannot
happen. this fix is based on one of the alternate proposals in Alexey
Izbyshev's original report of the bug.
---
 src/mq/mq_notify.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 7d3dff5f..a42e6afc 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -13,6 +13,11 @@ struct args {
 	const struct sigevent *sev;
 };
 
+static void cleanup(void *p)
+{
+	__syscall(SYS_close, *(int *)p);
+}
+
 static void *start(void *p)
 {
 	struct args *args = p;
@@ -23,8 +28,9 @@ static void *start(void *p)
 	union sigval val = args->sev->sigev_value;
 
 	sem_post(&args->sem);
+	pthread_cleanup_push(cleanup, &s);
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
-	close(s);
+	pthread_cleanup_pop(1);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
 		func(val);
 	return 0;
@@ -66,7 +72,6 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
 		pthread_cancel(td);
-		__syscall(SYS_close, s);
 		return -1;
 	}
 
-- 
2.21.0


[-- Attachment #4: 0003-mq_notify-join-worker-thread-before-returning-in-err.patch --]
[-- Type: text/plain, Size: 1116 bytes --]

From 1e9313fec1c666be2d2a3013aa0f34ed0e166c45 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:27:18 -0500
Subject: [PATCH 3/3] mq_notify: join worker thread before returning in error
 path

this avoids leaving behind transient resource consumption whose
cleanup is subject to scheduling behavior.
---
 src/mq/mq_notify.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index a42e6afc..31538880 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -54,7 +54,6 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
-	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
 	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
@@ -72,8 +71,10 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
 		pthread_cancel(td);
+		pthread_join(td, 0);
 		return -1;
 	}
 
+	pthread_detach(td);
 	return 0;
 }
-- 
2.21.0



* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-10 16:29     ` Rich Felker
@ 2023-02-11 14:45       ` Alexey Izbyshev
  2023-02-11 14:52         ` Rich Felker
  2023-02-11 15:06         ` Rich Felker
  0 siblings, 2 replies; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 14:45 UTC (permalink / raw)
  To: musl

On 2023-02-10 19:29, Rich Felker wrote:
> On Wed, Dec 14, 2022 at 09:49:26AM +0300, Alexey Izbyshev wrote:
>> On 2022-12-14 05:26, Rich Felker wrote:
>> >On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
>> >>In case of failure mq_notify closes the socket immediately after
>> >>sending a cancellation request to the worker thread that is going to
>> >>call or have already called recv on that socket. Even if we don't
>> >>consider the kernel behavior when the only descriptor to an
>> >>object that
>> >>is being used in a system call is closed, if the socket descriptor is
>> >>closed before the kernel looks at it, another thread could open a
>> >>descriptor with the same value in the meantime, resulting in recv
>> >>acting on a wrong object.
>> >>
>> >>Fix the race by moving pthread_cancel call before the barrier wait to
>> >>guarantee that the cancellation flag is set before the worker thread
>> >>enters recv.
>> >>---
>> >>Other ways to fix this:
>> >>
>> >>* Remove the racing close call from mq_notify and surround recv
>> >>  with pthread_cleanup_push/pop.
>> >>
>> >>* Make the worker thread joinable initially, join it before closing
>> >>  the socket on the failure path, and detach it on the happy path.
>> >>  This would also require disabling cancellation around join/detach
>> >>  to ensure that mq_notify itself is not cancelled in an inappropriate
>> >>  state.
>> >
>> >I'd put this aside for a while because of the pthread barrier
>> >involvement I kinda didn't want to deal with. The fix you have sounds
>> >like it works, but I think I'd rather pursue one of the other
>> >approaches, probably the joinable thread one.
>> >
>> >At present, the implementation of barriers seems to be buggy (I need
>> >to dig back up the post about that), and they're also a really
>> >expensive synchronization tool that goes both directions where we
>> >really only need one direction (notifying the caller we're done
>> >consuming the args). I'd rather switch to a semaphore, which is the
>> >lightest and most idiomatic (at least per present-day musl idioms) way
>> >to do this.
>> >
>> This sounds good to me. The same approach can also be used in
>> timer_create (assuming it's acceptable to add dependency on
>> pthread_cancel to that code).
>> 
>> >Using a joinable thread also lets us ensure we don't leave around
>> >threads that are waiting to be scheduled just to exit on failure
>> >return. Depending on scheduling attributes, this probably could be
>> >bad.
>> >
>> I also prefer this approach, though mostly for aesthetic reasons (I
>> haven't thought about the scheduling behavior). I didn't use it only
>> because I felt it's a "logically larger" change than simply moving
>> the pthread_barrier_wait call. And I wasn't aware that barriers are
>> buggy in musl.
> 
> Finally following up on this. How do the attached commits look?
> 
The first and third patches add calls to sem_wait, pthread_join, and 
pthread_detach, which are cancellation points in musl, so cancellation 
needs to be disabled across those calls. I mentioned that in my initial 
mail.

Also, I wasn't sure if it's fine to just remove 
pthread_attr_setdetachstate call, and I found the following in POSIX[1]:

"The function shall be executed in an environment as if it were the 
start_routine for a newly created thread with thread attributes 
specified by sigev_notify_attributes. If sigev_notify_attributes is 
NULL, the behavior shall be as if the thread were created with the 
detachstate attribute set to PTHREAD_CREATE_DETACHED. Supplying an 
attributes structure with a detachstate attribute of 
PTHREAD_CREATE_JOINABLE results in undefined behavior."

This language seems to forbid calling sigev_notify_function in the 
context of a joinable thread. And even if musl wants to ignore this, 
PTHREAD_CREATE_JOINABLE must still be set manually if 
sigev_notify_attributes is not NULL.

Otherwise, the patches look good to me.

[1] 
https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_02

Alexey


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 14:45       ` Alexey Izbyshev
@ 2023-02-11 14:52         ` Rich Felker
  2023-02-11 15:13           ` Alexey Izbyshev
  2023-02-11 15:06         ` Rich Felker
  1 sibling, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-11 14:52 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

On Sat, Feb 11, 2023 at 05:45:14PM +0300, Alexey Izbyshev wrote:
> On 2023-02-10 19:29, Rich Felker wrote:
> >On Wed, Dec 14, 2022 at 09:49:26AM +0300, Alexey Izbyshev wrote:
> >>On 2022-12-14 05:26, Rich Felker wrote:
> >>>On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
> >>>>In case of failure mq_notify closes the socket immediately after
> >>>>sending a cancellation request to the worker thread that is going to
> >>>>call or have already called recv on that socket. Even if we don't
> >>>>consider the kernel behavior when the only descriptor to an
> >>>>object that
> >>>>is being used in a system call is closed, if the socket descriptor is
> >>>>closed before the kernel looks at it, another thread could open a
> >>>>descriptor with the same value in the meantime, resulting in recv
> >>>>acting on a wrong object.
> >>>>
> >>>>Fix the race by moving pthread_cancel call before the barrier wait to
> >>>>guarantee that the cancellation flag is set before the worker thread
> >>>>enters recv.
> >>>>---
> >>>>Other ways to fix this:
> >>>>
> >>>>* Remove the racing close call from mq_notify and surround recv
> >>>>  with pthread_cleanup_push/pop.
> >>>>
> >>>>* Make the worker thread joinable initially, join it before closing
> >>>>  the socket on the failure path, and detach it on the happy path.
> >>>>  This would also require disabling cancellation around join/detach
> >>>>  to ensure that mq_notify itself is not cancelled in an inappropriate
> >>>>  state.
> >>>
> >>>I'd put this aside for a while because of the pthread barrier
> >>>involvement I kinda didn't want to deal with. The fix you have sounds
> >>>like it works, but I think I'd rather pursue one of the other
> >>>approaches, probably the joinable thread one.
> >>>
> >>>At present, the implementation of barriers seems to be buggy (I need
> >>>to dig back up the post about that), and they're also a really
> >>>expensive synchronization tool that goes both directions where we
> >>>really only need one direction (notifying the caller we're done
> >>>consuming the args). I'd rather switch to a semaphore, which is the
> >>>lightest and most idiomatic (at least per present-day musl idioms) way
> >>>to do this.
> >>>
> >>This sounds good to me. The same approach can also be used in
> >>timer_create (assuming it's acceptable to add dependency on
> >>pthread_cancel to that code).
> >>
> >>>Using a joinable thread also lets us ensure we don't leave around
> >>>threads that are waiting to be scheduled just to exit on failure
> >>>return. Depending on scheduling attributes, this probably could be
> >>>bad.
> >>>
> >>I also prefer this approach, though mostly for aesthetic reasons (I
> >>haven't thought about the scheduling behavior). I didn't use it only
> >>because I felt it's a "logically larger" change than simply moving
> >>the pthread_barrier_wait call. And I wasn't aware that barriers are
> >>buggy in musl.
> >
> >Finally following up on this. How do the attached commits look?
> >
> The first and third patches add calls to sem_wait, pthread_join, and
> pthread_detach, which are cancellation points in musl, so
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Nice catch -- this is actually a bug. pthread_detach is not permitted
to be a cancellation point.

Rich


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 14:45       ` Alexey Izbyshev
  2023-02-11 14:52         ` Rich Felker
@ 2023-02-11 15:06         ` Rich Felker
  2023-02-11 17:13           ` Markus Wichmann
  1 sibling, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-11 15:06 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

[-- Attachment #1: Type: text/plain, Size: 4229 bytes --]

On Sat, Feb 11, 2023 at 05:45:14PM +0300, Alexey Izbyshev wrote:
> On 2023-02-10 19:29, Rich Felker wrote:
> >On Wed, Dec 14, 2022 at 09:49:26AM +0300, Alexey Izbyshev wrote:
> >>On 2022-12-14 05:26, Rich Felker wrote:
> >>>On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
> >>>>In case of failure mq_notify closes the socket immediately after
> >>>>sending a cancellation request to the worker thread that is going to
> >>>>call or have already called recv on that socket. Even if we don't
> >>>>consider the kernel behavior when the only descriptor to an
> >>>>object that
> >>>>is being used in a system call is closed, if the socket descriptor is
> >>>>closed before the kernel looks at it, another thread could open a
> >>>>descriptor with the same value in the meantime, resulting in recv
> >>>>acting on a wrong object.
> >>>>
> >>>>Fix the race by moving pthread_cancel call before the barrier wait to
> >>>>guarantee that the cancellation flag is set before the worker thread
> >>>>enters recv.
> >>>>---
> >>>>Other ways to fix this:
> >>>>
> >>>>* Remove the racing close call from mq_notify and surround recv
> >>>>  with pthread_cleanup_push/pop.
> >>>>
> >>>>* Make the worker thread joinable initially, join it before closing
> >>>>  the socket on the failure path, and detach it on the happy path.
> >>>>  This would also require disabling cancellation around join/detach
> >>>>  to ensure that mq_notify itself is not cancelled in an inappropriate
> >>>>  state.
> >>>
> >>>I'd put this aside for a while because of the pthread barrier
> >>>involvement I kinda didn't want to deal with. The fix you have sounds
> >>>like it works, but I think I'd rather pursue one of the other
> >>>approaches, probably the joinable thread one.
> >>>
> >>>At present, the implementation of barriers seems to be buggy (I need
> >>>to dig back up the post about that), and they're also a really
> >>>expensive synchronization tool that goes both directions where we
> >>>really only need one direction (notifying the caller we're done
> >>>consuming the args). I'd rather switch to a semaphore, which is the
> >>>lightest and most idiomatic (at least per present-day musl idioms) way
> >>>to do this.
> >>>
> >>This sounds good to me. The same approach can also be used in
> >>timer_create (assuming it's acceptable to add dependency on
> >>pthread_cancel to that code).
> >>
> >>>Using a joinable thread also lets us ensure we don't leave around
> >>>threads that are waiting to be scheduled just to exit on failure
> >>>return. Depending on scheduling attributes, this probably could be
> >>>bad.
> >>>
> >>I also prefer this approach, though mostly for aesthetic reasons (I
> >>haven't thought about the scheduling behavior). I didn't use it only
> >>because I felt it's a "logically larger" change than simply moving
> >>the pthread_barrier_wait call. And I wasn't aware that barriers are
> >>buggy in musl.
> >
> >Finally following up on this. How do the attached commits look?
> >
> The first and third patches add calls to sem_wait, pthread_join, and
> pthread_detach, which are cancellation points in musl, so
> cancellation needs to be disabled across those calls. I mentioned
> that in my initial mail.
> 
> Also, I wasn't sure if it's fine to just remove
> pthread_attr_setdetachstate call, and I found the following in
> POSIX[1]:
> 
> "The function shall be executed in an environment as if it were the
> start_routine for a newly created thread with thread attributes
> specified by sigev_notify_attributes. If sigev_notify_attributes is
> NULL, the behavior shall be as if the thread were created with the
> detachstate attribute set to PTHREAD_CREATE_DETACHED. Supplying an
> attributes structure with a detachstate attribute of
> PTHREAD_CREATE_JOINABLE results in undefined behavior."
> 
> This language seems to forbid calling sigev_notify_function in the
> context of a joinable thread. And even if musl wants to ignore this,
> PTHREAD_CREATE_JOINABLE must still be set manually if
> sigev_notify_attributes is not NULL.
> 
> Otherwise, the patches look good to me.
> 
> [1] https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_02

Updated patch series attached.

[-- Attachment #2: 0001-fix-pthred_detach-inadvertently-acting-as-cancellati.patch --]
[-- Type: text/plain, Size: 1331 bytes --]

From 382009435fd6bf61df8f7c94dd44ea0ddd42f749 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Sat, 11 Feb 2023 09:54:12 -0500
Subject: [PATCH 1/4] fix pthred_detach inadvertently acting as cancellation
 point in race case

disabling cancellation around the pthread_join call seems to be the
safest and logically simplest fix. i believe it would also be possible
to just perform the unmap directly here after __tl_sync, removing the
dependency on pthread_join, but such an approach duplicately encodes a
lot more implementation assumptions.
---
 src/thread/pthread_detach.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/thread/pthread_detach.c b/src/thread/pthread_detach.c
index 77772af2..ac9c1035 100644
--- a/src/thread/pthread_detach.c
+++ b/src/thread/pthread_detach.c
@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
 {
 	/* If the cas fails, detach state is either already-detached
 	 * or exiting/exited, and pthread_join will trap or cleanup. */
-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE)
+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE) {
+		int cs;
+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
 		return __pthread_join(t, 0);
+		__pthread_setcancelstate(cs, 0);
+	}
 	return 0;
 }
 
-- 
2.21.0


[-- Attachment #3: 0002-mq_notify-use-semaphore-instead-of-barrier-to-sync-a.patch --]
[-- Type: text/plain, Size: 2162 bytes --]

From 348bf7160dded85792fa24f4e310d2595dae7f8c Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:17:02 -0500
Subject: [PATCH 2/4] mq_notify: use semaphore instead of barrier to sync args
 consumption

semaphores are a much lighter primitive, and more idiomatic with
current usage in the code base.
---
 src/mq/mq_notify.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 221591c7..a42888d2 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -4,10 +4,11 @@
 #include <sys/socket.h>
 #include <signal.h>
 #include <unistd.h>
+#include <semaphore.h>
 #include "syscall.h"
 
 struct args {
-	pthread_barrier_t barrier;
+	sem_t sem;
 	int sock;
 	const struct sigevent *sev;
 };
@@ -21,7 +22,7 @@ static void *start(void *p)
 	void (*func)(union sigval) = args->sev->sigev_notify_function;
 	union sigval val = args->sev->sigev_value;
 
-	pthread_barrier_wait(&args->barrier);
+	sem_post(&args->sem);
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
 	close(s);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
@@ -37,6 +38,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	int s;
 	struct sigevent sev2;
 	static const char zeros[32];
+	int cs;
 
 	if (!sev || sev->sigev_notify != SIGEV_THREAD)
 		return syscall(SYS_mq_notify, mqd, sev);
@@ -48,7 +50,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
 	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
-	pthread_barrier_init(&args.barrier, 0, 2);
+	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
 		__syscall(SYS_close, s);
@@ -56,8 +58,10 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 		return -1;
 	}
 
-	pthread_barrier_wait(&args.barrier);
-	pthread_barrier_destroy(&args.barrier);
+	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
+	sem_wait(&args.sem);
+	pthread_setcancelstate(cs, 0);
+	sem_destroy(&args.sem);
 
 	sev2.sigev_notify = SIGEV_THREAD;
 	sev2.sigev_signo = s;
-- 
2.21.0


[-- Attachment #4: 0003-mq_notify-fix-use-after-close-double-close-bug-in-er.patch --]
[-- Type: text/plain, Size: 1596 bytes --]

From 560764d8d2ef5e531bb0fb99bcd092169abafbe0 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:22:45 -0500
Subject: [PATCH 3/4] mq_notify: fix use-after-close/double-close bug in error
 path

in the error path where the mq_notify syscall fails, the initiating
thread may close the socket before the worker thread calls recv on it.
even if this does not happen, both threads perform the close
operation, resulting in double-close.

move the close to a cancellation cleanup handler so this cannot
happen. this fix is based on one of the alternate proposals in Alexey
Izbyshev's original report of the bug.
---
 src/mq/mq_notify.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index a42888d2..ab718732 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -13,6 +13,11 @@ struct args {
 	const struct sigevent *sev;
 };
 
+static void cleanup(void *p)
+{
+	__syscall(SYS_close, *(int *)p);
+}
+
 static void *start(void *p)
 {
 	struct args *args = p;
@@ -23,8 +28,9 @@ static void *start(void *p)
 	union sigval val = args->sev->sigev_value;
 
 	sem_post(&args->sem);
+	pthread_cleanup_push(cleanup, &s);
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
-	close(s);
+	pthread_cleanup_pop(1);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
 		func(val);
 	return 0;
@@ -69,7 +75,6 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
 		pthread_cancel(td);
-		__syscall(SYS_close, s);
 		return -1;
 	}
 
-- 
2.21.0


[-- Attachment #5: 0004-mq_notify-join-worker-thread-before-returning-in-err.patch --]
[-- Type: text/plain, Size: 1499 bytes --]

From e071fb9757ae71c3ec2c7b467e511171a906ae03 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:27:18 -0500
Subject: [PATCH 4/4] mq_notify: join worker thread before returning in error
 path

this avoids leaving behind transient resource consumption whose
cleanup is subject to scheduling behavior.
---
 src/mq/mq_notify.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index ab718732..7c6e3b5a 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -55,7 +55,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
-	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
+	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
 	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
@@ -66,7 +66,6 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
 	sem_wait(&args.sem);
-	pthread_setcancelstate(cs, 0);
 	sem_destroy(&args.sem);
 
 	sev2.sigev_notify = SIGEV_THREAD;
@@ -75,8 +74,12 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
 		pthread_cancel(td);
+		pthread_join(td, 0);
+		pthread_setcancelstate(cs, 0);
 		return -1;
 	}
 
+	pthread_setcancelstate(cs, 0);
+	pthread_detach(td);
 	return 0;
 }
-- 
2.21.0



* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 14:52         ` Rich Felker
@ 2023-02-11 15:13           ` Alexey Izbyshev
  0 siblings, 0 replies; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 15:13 UTC (permalink / raw)
  To: musl

On 2023-02-11 17:52, Rich Felker wrote:
> On Sat, Feb 11, 2023 at 05:45:14PM +0300, Alexey Izbyshev wrote:
>> On 2023-02-10 19:29, Rich Felker wrote:
>> >On Wed, Dec 14, 2022 at 09:49:26AM +0300, Alexey Izbyshev wrote:
>> >>On 2022-12-14 05:26, Rich Felker wrote:
>> >>>On Wed, Nov 09, 2022 at 01:46:13PM +0300, Alexey Izbyshev wrote:
>> >>>>In case of failure mq_notify closes the socket immediately after
>> >>>>sending a cancellation request to the worker thread that is going to
>> >>>>call or have already called recv on that socket. Even if we don't
>> >>>>consider the kernel behavior when the only descriptor to an
>> >>>>object that
>> >>>>is being used in a system call is closed, if the socket descriptor is
>> >>>>closed before the kernel looks at it, another thread could open a
>> >>>>descriptor with the same value in the meantime, resulting in recv
>> >>>>acting on a wrong object.
>> >>>>
>> >>>>Fix the race by moving pthread_cancel call before the barrier wait to
>> >>>>guarantee that the cancellation flag is set before the worker thread
>> >>>>enters recv.
>> >>>>---
>> >>>>Other ways to fix this:
>> >>>>
>> >>>>* Remove the racing close call from mq_notify and surround recv
>> >>>>  with pthread_cleanup_push/pop.
>> >>>>
>> >>>>* Make the worker thread joinable initially, join it before closing
>> >>>>  the socket on the failure path, and detach it on the happy path.
>> >>>>  This would also require disabling cancellation around join/detach
>> >>>>  to ensure that mq_notify itself is not cancelled in an inappropriate
>> >>>>  state.
>> >>>
>> >>>I'd put this aside for a while because of the pthread barrier
>> >>>involvement I kinda didn't want to deal with. The fix you have sounds
>> >>>like it works, but I think I'd rather pursue one of the other
>> >>>approaches, probably the joinable thread one.
>> >>>
>> >>>At present, the implementation of barriers seems to be buggy (I need
>> >>>to dig back up the post about that), and they're also a really
>> >>>expensive synchronization tool that goes both directions where we
>> >>>really only need one direction (notifying the caller we're done
>> >>>consuming the args). I'd rather switch to a semaphore, which is the
>> >>>lightest and most idiomatic (at least per present-day musl idioms) way
>> >>>to do this.
>> >>>
>> >>This sounds good to me. The same approach can also be used in
>> >>timer_create (assuming it's acceptable to add dependency on
>> >>pthread_cancel to that code).
>> >>
>> >>>Using a joinable thread also lets us ensure we don't leave around
>> >>>threads that are waiting to be scheduled just to exit on failure
>> >>>return. Depending on scheduling attributes, this probably could be
>> >>>bad.
>> >>>
>> >>I also prefer this approach, though mostly for aesthetic reasons (I
>> >>haven't thought about the scheduling behavior). I didn't use it only
>> >>because I felt it's a "logically larger" change than simply moving
>> >>the pthread_barrier_wait call. And I wasn't aware that barriers are
>> >>buggy in musl.
>> >
>> >Finally following up on this. How do the attached commits look?
>> >
>> The first and third patches add calls to sem_wait, pthread_join, and
>> pthread_detach, which are cancellation points in musl, so
>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Nice catch -- this is actually a bug. pthread_detach is not permitted
> to be a cancellation point.
> 
Indeed. I'd actually tried to check that before I sent the email, but 
got confused by the following sentence from "man 7 pthreads"[1]:

"An implementation may also mark other functions not specified in the 
standard as cancellation points."

My mistake is that I read this as "if a function is not specified to be 
a cancellation point in the standard, an implementation may still mark 
it as a cancellation point". But apparently it means that "if a function 
is not mentioned in the standard at all, an implementation may still 
mark it as a cancellation point".

To anyone wondering, the actual text from POSIX is[2]:

"In addition, a cancellation point may occur when a thread is executing 
any function that this standard does not require to be thread-safe but 
the implementation documents as being thread-safe. If a thread is 
cancelled while executing a non-thread-safe function, the behavior is 
undefined.

An implementation shall not introduce cancellation points into any other 
functions specified in this volume of POSIX.1-2017."

(And pthread_detach is required to be thread-safe).

[1] https://man7.org/linux/man-pages/man7/pthreads.7.html
[2] 
https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_09_05_02


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 15:06         ` Rich Felker
@ 2023-02-11 17:13           ` Markus Wichmann
  2023-02-11 17:46             ` Rich Felker
  2023-02-11 17:50             ` Alexey Izbyshev
  0 siblings, 2 replies; 21+ messages in thread
From: Markus Wichmann @ 2023-02-11 17:13 UTC (permalink / raw)
  To: musl; +Cc: Alexey Izbyshev

On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> --- a/src/thread/pthread_detach.c
> +++ b/src/thread/pthread_detach.c
> @@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
>  {
>  	/* If the cas fails, detach state is either already-detached
>  	 * or exiting/exited, and pthread_join will trap or cleanup. */
> -	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE)
> +	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE) {
> +		int cs;
> +		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
>  		return __pthread_join(t, 0);
                ^^^^^^ I think you forgot to rework this.
> +		__pthread_setcancelstate(cs, 0);
> +	}
>  	return 0;
>  }
>

I see no other obvious missteps, though.

Ciao,
Markus


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 17:13           ` Markus Wichmann
@ 2023-02-11 17:46             ` Rich Felker
  2023-02-11 17:50             ` Alexey Izbyshev
  1 sibling, 0 replies; 21+ messages in thread
From: Rich Felker @ 2023-02-11 17:46 UTC (permalink / raw)
  To: Markus Wichmann; +Cc: musl, Alexey Izbyshev

On Sat, Feb 11, 2023 at 06:13:38PM +0100, Markus Wichmann wrote:
> On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> > --- a/src/thread/pthread_detach.c
> > +++ b/src/thread/pthread_detach.c
> > @@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
> >  {
> >  	/* If the cas fails, detach state is either already-detached
> >  	 * or exiting/exited, and pthread_join will trap or cleanup. */
> > -	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE)
> > +	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE) {
> > +		int cs;
> > +		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
> >  		return __pthread_join(t, 0);
>                 ^^^^^^ I think you forgot to rework this.
> > +		__pthread_setcancelstate(cs, 0);
> > +	}
> >  	return 0;
> >  }
> >
> 
> I see no other obvious missteps, though.

Thanks. I'll just remove the return keyword. There's no possible
return value except 0; anything else would be UB (and we already trap
it where possible).
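
So the block would presumably end up as:

	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE) {
		int cs;
		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
		__pthread_join(t, 0);	/* result can only be 0 here */
		__pthread_setcancelstate(cs, 0);
	}
	return 0;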

Rich


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 17:13           ` Markus Wichmann
  2023-02-11 17:46             ` Rich Felker
@ 2023-02-11 17:50             ` Alexey Izbyshev
  2023-02-11 17:59               ` Rich Felker
  1 sibling, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 17:50 UTC (permalink / raw)
  To: musl

On 2023-02-11 20:13, Markus Wichmann wrote:
> On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
>> --- a/src/thread/pthread_detach.c
>> +++ b/src/thread/pthread_detach.c
>> @@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
>>  {
>>  	/* If the cas fails, detach state is either already-detached
>>  	 * or exiting/exited, and pthread_join will trap or cleanup. */
>> -	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != 
>> DT_JOINABLE)
>> +	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != 
>> DT_JOINABLE) {
>> +		int cs;
>> +		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
>>  		return __pthread_join(t, 0);
>                 ^^^^^^ I think you forgot to rework this.
>> +		__pthread_setcancelstate(cs, 0);
>> +	}
>>  	return 0;
>>  }
>> 
> 
> I see no other obvious missteps, though.
> 
Same here, apart from this and misspelled "pthred_detach" in the commit 
message, the patches look good to me.

Regarding the POSIX requirement to run sigev_notify_function in the 
context of a detached thread, while it's possible to observe the wrong 
detachstate for a short while via pthread_getattr_np after these 
patches, I'm not sure there is a standard way to do that. Even if it 
exists, this minor issue may be not worth caring about.

Thanks,
Alexey


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 17:50             ` Alexey Izbyshev
@ 2023-02-11 17:59               ` Rich Felker
  2023-02-11 18:08                 ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-11 17:59 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
> On 2023-02-11 20:13, Markus Wichmann wrote:
> >On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> >>--- a/src/thread/pthread_detach.c
> >>+++ b/src/thread/pthread_detach.c
> >>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
> >> {
> >> 	/* If the cas fails, detach state is either already-detached
> >> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
> >>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>DT_JOINABLE)
> >>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>DT_JOINABLE) {
> >>+		int cs;
> >>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
> >> 		return __pthread_join(t, 0);
> >                ^^^^^^ I think you forgot to rework this.
> >>+		__pthread_setcancelstate(cs, 0);
> >>+	}
> >> 	return 0;
> >> }
> >>
> >
> >I see no other obvious missteps, though.
> >
> Same here, apart from this and misspelled "pthred_detach" in the
> commit message, the patches look good to me.
> 
> Regarding the POSIX requirement to run sigev_notify_function in the
> context of a detached thread, while it's possible to observe the
> wrong detachstate for a short while via pthread_getattr_np after
> these patches, I'm not sure there is a standard way to do that. Even
> if it exists, this minor issue may not be worth caring about.

Would this just be if the notification callback executes before
mq_notify returns in the parent? I suppose we could have the newly
created thread do the work of making the syscall, handling the error
case, detaching itself on success and and reporting back to the
mq_notify function whether it succeeded or failed via the
semaphore/args structure. Thoughts on that?
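
Rough sketch of what I'm thinking, against the current mq_notify.c --
untested; the mqd and err fields are new, and zeros[] would have to
move to file scope:

struct args {
	sem_t sem;
	int sock;
	mqd_t mqd;	/* new: worker makes the SYS_mq_notify call itself */
	int err;	/* new: 0 or errno value, reported back */
	const struct sigevent *sev;
};

static void *start(void *p)
{
	struct args *args = p;
	char buf[32];
	ssize_t n;
	int err;
	int s = args->sock;
	void (*func)(union sigval) = args->sev->sigev_notify_function;
	union sigval val = args->sev->sigev_value;
	struct sigevent sev2 = {
		.sigev_notify = SIGEV_THREAD,
		.sigev_signo = s,
		.sigev_value.sival_ptr = (void *)&zeros,
	};

	err = syscall(SYS_mq_notify, args->mqd, &sev2) < 0 ? errno : 0;
	args->err = err;
	if (!err) pthread_detach(pthread_self());
	sem_post(&args->sem);	/* args may be gone after this */
	if (err) {
		__syscall(SYS_close, s);
		return 0;
	}
	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
	close(s);
	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
		func(val);
	return 0;
}

/* and in mq_notify, after pthread_create succeeds: */
sem_wait(&args.sem);
sem_destroy(&args.sem);
if (args.err) {
	pthread_join(td, 0);	/* worker detaches itself only on success */
	errno = args.err;
	return -1;
}
return 0;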

Rich



* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 17:59               ` Rich Felker
@ 2023-02-11 18:08                 ` Alexey Izbyshev
  2023-02-11 18:35                   ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 18:08 UTC (permalink / raw)
  To: musl

On 2023-02-11 20:59, Rich Felker wrote:
> On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
>> On 2023-02-11 20:13, Markus Wichmann wrote:
>> >On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
>> >>--- a/src/thread/pthread_detach.c
>> >>+++ b/src/thread/pthread_detach.c
>> >>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
>> >> {
>> >> 	/* If the cas fails, detach state is either already-detached
>> >> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
>> >>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>DT_JOINABLE)
>> >>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>DT_JOINABLE) {
>> >>+		int cs;
>> >>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
>> >> 		return __pthread_join(t, 0);
>> >                ^^^^^^ I think you forgot to rework this.
>> >>+		__pthread_setcancelstate(cs, 0);
>> >>+	}
>> >> 	return 0;
>> >> }
>> >>
>> >
>> >I see no other obvious missteps, though.
>> >
>> Same here, apart from this and misspelled "pthred_detach" in the
>> commit message, the patches look good to me.
>> 
>> Regarding the POSIX requirement to run sigev_notify_function in the
>> context of a detached thread, while it's possible to observe the
>> wrong detachstate for a short while via pthread_getattr_np after
>> these patches, I'm not sure there is a standard way to do that. Even
>> if it exists, this minor issue may not be worth caring about.
> 
> Would this just be if the notification callback executes before
> mq_notify returns in the parent?

Yes, it seems so.

> I suppose we could have the newly
> created thread do the work of making the syscall, handling the error
> case, detaching itself on success and and reporting back to the
> mq_notify function whether it succeeded or failed via the
> semaphore/args structure. Thoughts on that?
> 
Could we just move pthread_detach call to the worker thread to the point 
after pthread_cleanup_pop?
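
I.e., hypothetically, in start(), with the pthread_detach(td) on
mq_notify's success path dropped:

	sem_post(&args->sem);
	pthread_cleanup_push(cleanup, &s);
	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
	pthread_cleanup_pop(1);
	pthread_detach(pthread_self());	/* skipped if cancellation unwinds above */
	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
		func(val);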

Alexey


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 18:08                 ` Alexey Izbyshev
@ 2023-02-11 18:35                   ` Rich Felker
  2023-02-11 19:28                     ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-11 18:35 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

On Sat, Feb 11, 2023 at 09:08:53PM +0300, Alexey Izbyshev wrote:
> On 2023-02-11 20:59, Rich Felker wrote:
> >On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
> >>On 2023-02-11 20:13, Markus Wichmann wrote:
> >>>On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> >>>>--- a/src/thread/pthread_detach.c
> >>>>+++ b/src/thread/pthread_detach.c
> >>>>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
> >>>> {
> >>>> 	/* If the cas fails, detach state is either already-detached
> >>>> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
> >>>>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>DT_JOINABLE)
> >>>>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>DT_JOINABLE) {
> >>>>+		int cs;
> >>>>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
> >>>> 		return __pthread_join(t, 0);
> >>>                ^^^^^^ I think you forgot to rework this.
> >>>>+		__pthread_setcancelstate(cs, 0);
> >>>>+	}
> >>>> 	return 0;
> >>>> }
> >>>>
> >>>
> >>>I see no other obvious missteps, though.
> >>>
> >>Same here, apart from this and misspelled "pthred_detach" in the
> >>commit message, the patches look good to me.
> >>
> >>Regarding the POSIX requirement to run sigev_notify_function in the
> >>context of a detached thread, while it's possible to observe the
> >>wrong detachstate for a short while via pthread_getattr_np after
> >>these patches, I'm not sure there is a standard way to do that. Even
> >>if it exists, this minor issue may be not worth caring about.
> >
> >Would this just be if the notification callback executes before
> >mq_notify returns in the parent?
> 
> Yes, it seems so.
> 
> >I suppose we could have the newly
> >created thread do the work of making the syscall, handling the error
> >case, detaching itself on success and and reporting back to the
> >mq_notify function whether it succeeded or failed via the
> >semaphore/args structure. Thoughts on that?
> >
> Could we just move pthread_detach call to the worker thread to the
> point after pthread_cleanup_pop?

I thought that sounded dubious, in that it might lead to an attempt to
join a detached thread, but maybe it's safe to assume recv will never
return if the mq_notify syscall failed...?

Rich


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 18:35                   ` Rich Felker
@ 2023-02-11 19:28                     ` Alexey Izbyshev
  2023-02-11 19:49                       ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 19:28 UTC (permalink / raw)
  To: musl

On 2023-02-11 21:35, Rich Felker wrote:
> On Sat, Feb 11, 2023 at 09:08:53PM +0300, Alexey Izbyshev wrote:
>> On 2023-02-11 20:59, Rich Felker wrote:
>> >On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
>> >>On 2023-02-11 20:13, Markus Wichmann wrote:
>> >>>On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
>> >>>>--- a/src/thread/pthread_detach.c
>> >>>>+++ b/src/thread/pthread_detach.c
>> >>>>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
>> >>>> {
>> >>>> 	/* If the cas fails, detach state is either already-detached
>> >>>> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
>> >>>>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>>>DT_JOINABLE)
>> >>>>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>>>DT_JOINABLE) {
>> >>>>+		int cs;
>> >>>>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
>> >>>> 		return __pthread_join(t, 0);
>> >>>                ^^^^^^ I think you forgot to rework this.
>> >>>>+		__pthread_setcancelstate(cs, 0);
>> >>>>+	}
>> >>>> 	return 0;
>> >>>> }
>> >>>>
>> >>>
>> >>>I see no other obvious missteps, though.
>> >>>
>> >>Same here, apart from this and misspelled "pthred_detach" in the
>> >>commit message, the patches look good to me.
>> >>
>> >>Regarding the POSIX requirement to run sigev_notify_function in the
>> >>context of a detached thread, while it's possible to observe the
>> >>wrong detachstate for a short while via pthread_getattr_np after
>> >>these patches, I'm not sure there is a standard way to do that. Even
>> >>if it exists, this minor issue may be not worth caring about.
>> >
>> >Would this just be if the notification callback executes before
>> >mq_notify returns in the parent?
>> 
>> Yes, it seems so.
>> 
>> >I suppose we could have the newly
>> >created thread do the work of making the syscall, handling the error
>> >case, detaching itself on success and and reporting back to the
>> >mq_notify function whether it succeeded or failed via the
>> >semaphore/args structure. Thoughts on that?
>> >
>> Could we just move pthread_detach call to the worker thread to the
>> point after pthread_cleanup_pop?
> 
> I thought that sounded dubious, in that it might lead to an attempt to
> join a detached thread, but maybe it's safe to assume recv will never
> return if the mq_notify syscall failed...?
> 
Actually, because app signals are not blocked when the worker thread is 
created, recv can indeed return early with EINTR. But this looks like 
just a bug.

Otherwise, mq_notify already assumes that recv can't return before  
SYS_mq_notify (if it did, the syscall would try to register a closed 
fd). I haven't tried to prove it (e.g. maybe recv may need to allocate 
something before blocking and hence can fail with ENOMEM?), but if it's 
true, I don't see how a failed SYS_mq_notify could cause recv to return, 
so joining a detached thread should be impossible if we make 
pthread_detach follow recv.

Alexey


* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 19:28                     ` Alexey Izbyshev
@ 2023-02-11 19:49                       ` Rich Felker
  2023-02-11 20:14                         ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-11 19:49 UTC (permalink / raw)
  To: Alexey Izbyshev; +Cc: musl

On Sat, Feb 11, 2023 at 10:28:20PM +0300, Alexey Izbyshev wrote:
> On 2023-02-11 21:35, Rich Felker wrote:
> >On Sat, Feb 11, 2023 at 09:08:53PM +0300, Alexey Izbyshev wrote:
> >>On 2023-02-11 20:59, Rich Felker wrote:
> >>>On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
> >>>>On 2023-02-11 20:13, Markus Wichmann wrote:
> >>>>>On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> >>>>>>--- a/src/thread/pthread_detach.c
> >>>>>>+++ b/src/thread/pthread_detach.c
> >>>>>>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
> >>>>>> {
> >>>>>> 	/* If the cas fails, detach state is either already-detached
> >>>>>> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
> >>>>>>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>>>DT_JOINABLE)
> >>>>>>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>>>DT_JOINABLE) {
> >>>>>>+		int cs;
> >>>>>>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
> >>>>>> 		return __pthread_join(t, 0);
> >>>>>                ^^^^^^ I think you forgot to rework this.
> >>>>>>+		__pthread_setcancelstate(cs, 0);
> >>>>>>+	}
> >>>>>> 	return 0;
> >>>>>> }
> >>>>>>
> >>>>>
> >>>>>I see no other obvious missteps, though.
> >>>>>
> >>>>Same here, apart from this and misspelled "pthred_detach" in the
> >>>>commit message, the patches look good to me.
> >>>>
> >>>>Regarding the POSIX requirement to run sigev_notify_function in the
> >>>>context of a detached thread, while it's possible to observe the
> >>>>wrong detachstate for a short while via pthread_getattr_np after
> >>>>these patches, I'm not sure there is a standard way to do that. Even
> >>>>if it exists, this minor issue may be not worth caring about.
> >>>
> >>>Would this just be if the notification callback executes before
> >>>mq_notify returns in the parent?
> >>
> >>Yes, it seems so.
> >>
> >>>I suppose we could have the newly
> >>>created thread do the work of making the syscall, handling the error
> >>>case, detaching itself on success and reporting back to the
> >>>mq_notify function whether it succeeded or failed via the
> >>>semaphore/args structure. Thoughts on that?
> >>>
>> >>Could we just move the pthread_detach call into the worker thread,
>> >>at the point after pthread_cleanup_pop?
> >
> >I thought that sounded dubious, in that it might lead to an attempt to
> >join a detached thread, but maybe it's safe to assume recv will never
> >return if the mq_notify syscall failed...?
> >
> Actually, because app signals are not blocked when the worker thread
> is created, recv can indeed return early with EINTR. But this looks
> like just a bug.

Yes. While it's not a conformance bug to run with signals unblocked
("The signal mask of this thread is implementation-defined.") it's a
functional bug to ever introduce threads that don't block all
application signals, since these interfere with sigwait & other
application control of where signals are delivered. This is an
oversight. I'll make it mask all signals.
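
To illustrate the kind of interference (a contrived, standalone
example, not musl code): an application doing controlled delivery via
sigwait relies on every thread in the process having the signal
blocked; a library-internal thread created with an unblocked mask is
an eligible delivery target and can steal the signal.

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *waiter(void *arg)
{
	sigset_t set;
	int sig;
	sigemptyset(&set);
	sigaddset(&set, SIGUSR1);
	/* The application expects every SIGUSR1 to be consumed here. */
	for (;;) {
		sigwait(&set, &sig);
		printf("consumed signal %d\n", sig);
	}
	return 0;
}

int main(void)
{
	sigset_t set;
	pthread_t td;
	sigemptyset(&set);
	sigaddset(&set, SIGUSR1);
	/* Block before creating any threads so the mask is inherited;
	 * a thread that leaves SIGUSR1 unblocked would break this. */
	pthread_sigmask(SIG_BLOCK, &set, 0);
	pthread_create(&td, 0, waiter, 0);
	pthread_join(td, 0);
	return 0;
}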

> Otherwise, mq_notify already assumes that recv can't return before
> SYS_mq_notify (if it did, the syscall would try to register a closed
> fd). I haven't tried to prove it (e.g. maybe recv may need to
> allocate something before blocking and hence can fail with ENOMEM?),
> but if it's true, I don't see how a failed SYS_mq_notify could cause
> recv to return, so joining a detached thread should be impossible if
> we make pthread_detach follow recv.

I'm thinking for now maybe we should just drop the joining on error,
and leave it starting out detached. While recv should not fail, it's
obviously possible to make it fail in a seccomp sandbox, and you don't
want that to turn into UB inside the implementation. If it does fail,
the thread should still exit, but we have no way to synchronize with
the mq_notify parent to decide whether it's being joined or not in
this case without extra sync machinery...

Rich

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 19:49                       ` Rich Felker
@ 2023-02-11 20:14                         ` Alexey Izbyshev
  2023-02-12  0:32                           ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-11 20:14 UTC (permalink / raw)
  To: musl

On 2023-02-11 22:49, Rich Felker wrote:
> On Sat, Feb 11, 2023 at 10:28:20PM +0300, Alexey Izbyshev wrote:
>> On 2023-02-11 21:35, Rich Felker wrote:
>> >On Sat, Feb 11, 2023 at 09:08:53PM +0300, Alexey Izbyshev wrote:
>> >>On 2023-02-11 20:59, Rich Felker wrote:
>> >>>On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
>> >>>>On 2023-02-11 20:13, Markus Wichmann wrote:
>> >>>>>On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
>> >>>>>>--- a/src/thread/pthread_detach.c
>> >>>>>>+++ b/src/thread/pthread_detach.c
>> >>>>>>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
>> >>>>>> {
>> >>>>>> 	/* If the cas fails, detach state is either already-detached
>> >>>>>> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
>> >>>>>>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>>>>>DT_JOINABLE)
>> >>>>>>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
>> >>>>>>DT_JOINABLE) {
>> >>>>>>+		int cs;
>> >>>>>>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
>> >>>>>> 		return __pthread_join(t, 0);
>> >>>>>                ^^^^^^ I think you forgot to rework this.
>> >>>>>>+		__pthread_setcancelstate(cs, 0);
>> >>>>>>+	}
>> >>>>>> 	return 0;
>> >>>>>> }
>> >>>>>>
>> >>>>>
>> >>>>>I see no other obvious missteps, though.
>> >>>>>
>> >>>>Same here, apart from this and misspelled "pthred_detach" in the
>> >>>>commit message, the patches look good to me.
>> >>>>
>> >>>>Regarding the POSIX requirement to run sigev_notify_function in the
>> >>>>context of a detached thread, while it's possible to observe the
>> >>>>wrong detachstate for a short while via pthread_getattr_np after
>> >>>>these patches, I'm not sure there is a standard way to do that. Even
>> >>>>if it exists, this minor issue may be not worth caring about.
>> >>>
>> >>>Would this just be if the notification callback executes before
>> >>>mq_notify returns in the parent?
>> >>
>> >>Yes, it seems so.
>> >>
>> >>>I suppose we could have the newly
>> >>>created thread do the work of making the syscall, handling the error
>> >>>case, detaching itself on success and reporting back to the
>> >>>mq_notify function whether it succeeded or failed via the
>> >>>semaphore/args structure. Thoughts on that?
>> >>>
>> >>Could we just move the pthread_detach call into the worker thread,
>> >>at the point after pthread_cleanup_pop?
>> >
>> >I thought that sounded dubious, in that it might lead to an attempt to
>> >join a detached thread, but maybe it's safe to assume recv will never
>> >return if the mq_notify syscall failed...?
>> >
>> Actually, because app signals are not blocked when the worker thread
>> is created, recv can indeed return early with EINTR. But this looks
>> like just a bug.
> 
> Yes. While it's not a conformance bug to run with signals unblocked
> ("The signal mask of this thread is implementation-defined.") it's a
> functional bug to ever introduce threads that don't block all
> application signals, since these interfere with sigwait & other
> application control of where signals are delivered. This is an
> oversight. I'll make it mask all signals.
> 
>> Otherwise, mq_notify already assumes that recv can't return before
>> SYS_mq_notify (if it did, the syscall would try to register a closed
>> fd). I haven't tried to prove it (e.g. maybe recv may need to
>> allocate something before blocking and hence can fail with ENOMEM?),
>> but if it's true, I don't see how a failed SYS_mq_notify could cause
>> recv to return, so joining a detached thread should be impossible if
>> we make pthread_detach follow recv.
> 
> I'm thinking for now maybe we should just drop the joining on error,
> and leave it starting out detached. While recv should not fail, it's
> obviously possible to make it fail in a seccomp sandbox, and you don't
> want that to turn into UB inside the implementation. If it does fail,
> the thread should still exit, but we have no way to synchronize with
> the mq_notify parent to decide whether it's being joined or not in
> this case without extra sync machinery...
> 
By dropping pthread_join we'd avoid introducing a new UB case if recv 
fails unexpectedly, but the existing case that I mentioned 
(SYS_mq_notify trying to register a closed fd) would remain. It seems to 
me that moving SYS_mq_notify into the worker thread as you suggested 
earlier is the cleanest option if we're worrying about recv.

Alexey

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-11 20:14                         ` Alexey Izbyshev
@ 2023-02-12  0:32                           ` Rich Felker
  2023-02-12 18:23                             ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Rich Felker @ 2023-02-12  0:32 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 4658 bytes --]

On Sat, Feb 11, 2023 at 11:14:33PM +0300, Alexey Izbyshev wrote:
> On 2023-02-11 22:49, Rich Felker wrote:
> >On Sat, Feb 11, 2023 at 10:28:20PM +0300, Alexey Izbyshev wrote:
> >>On 2023-02-11 21:35, Rich Felker wrote:
> >>>On Sat, Feb 11, 2023 at 09:08:53PM +0300, Alexey Izbyshev wrote:
> >>>>On 2023-02-11 20:59, Rich Felker wrote:
> >>>>>On Sat, Feb 11, 2023 at 08:50:15PM +0300, Alexey Izbyshev wrote:
> >>>>>>On 2023-02-11 20:13, Markus Wichmann wrote:
> >>>>>>>On Sat, Feb 11, 2023 at 10:06:03AM -0500, Rich Felker wrote:
> >>>>>>>>--- a/src/thread/pthread_detach.c
> >>>>>>>>+++ b/src/thread/pthread_detach.c
> >>>>>>>>@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
> >>>>>>>> {
> >>>>>>>> 	/* If the cas fails, detach state is either already-detached
> >>>>>>>> 	 * or exiting/exited, and pthread_join will trap or cleanup. */
> >>>>>>>>-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>>>>>DT_JOINABLE)
> >>>>>>>>+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) !=
> >>>>>>>>DT_JOINABLE) {
> >>>>>>>>+		int cs;
> >>>>>>>>+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
> >>>>>>>> 		return __pthread_join(t, 0);
> >>>>>>>                ^^^^^^ I think you forgot to rework this.
> >>>>>>>>+		__pthread_setcancelstate(cs, 0);
> >>>>>>>>+	}
> >>>>>>>> 	return 0;
> >>>>>>>> }
> >>>>>>>>
> >>>>>>>
> >>>>>>>I see no other obvious missteps, though.
> >>>>>>>
> >>>>>>Same here, apart from this and misspelled "pthred_detach" in the
> >>>>>>commit message, the patches look good to me.
> >>>>>>
> >>>>>>Regarding the POSIX requirement to run sigev_notify_function in the
> >>>>>>context of a detached thread, while it's possible to observe the
> >>>>>>wrong detachstate for a short while via pthread_getattr_np after
> >>>>>>these patches, I'm not sure there is a standard way to do that. Even
> >>>>>>if it exists, this minor issue may be not worth caring about.
> >>>>>
> >>>>>Would this just be if the notification callback executes before
> >>>>>mq_notify returns in the parent?
> >>>>
> >>>>Yes, it seems so.
> >>>>
> >>>>>I suppose we could have the newly
> >>>>>created thread do the work of making the syscall, handling the error
> >>>>>case, detaching itself on success and reporting back to the
> >>>>>mq_notify function whether it succeeded or failed via the
> >>>>>semaphore/args structure. Thoughts on that?
> >>>>>
> >>>>Could we just move the pthread_detach call into the worker thread,
> >>>>at the point after pthread_cleanup_pop?
> >>>
> >>>I thought that sounded dubious, in that it might lead to an attempt to
> >>>join a detached thread, but maybe it's safe to assume recv will never
> >>>return if the mq_notify syscall failed...?
> >>>
> >>Actually, because app signals are not blocked when the worker thread
> >>is created, recv can indeed return early with EINTR. But this looks
> >>like just a bug.
> >
> >Yes. While it's not a conformance bug to run with signals unblocked
> >("The signal mask of this thread is implementation-defined.") it's a
> >functional bug to ever introduce threads that don't block all
> >application signals, since these interfere with sigwait & other
> >application control of where signals are delivered. This is an
> >oversight. I'll make it mask all signals.
> >
> >>Otherwise, mq_notify already assumes that recv can't return before
> >>SYS_mq_notify (if it did, the syscall would try to register a closed
> >>fd). I haven't tried to prove it (e.g. maybe recv may need to
> >>allocate something before blocking and hence can fail with ENOMEM?),
> >>but if it's true, I don't see how a failed SYS_mq_notify could cause
> >>recv to return, so joining a detached thread should be impossible if
> >>we make pthread_detach follow recv.
> >
> >I'm thinking for now maybe we should just drop the joining on error,
> >and leave it starting out detached. While recv should not fail, it's
> >obviously possible to make it fail in a seccomp sandbox, and you don't
> >want that to turn into UB inside the implementation. If it does fail,
> >the thread should still exit, but we have no way to synchronize with
> >the mq_notify parent to decide whether it's being joined or not in
> >this case without extra sync machinery...
> >
> By dropping pthread_join we'd avoid introducing a new UB case if
> recv fails unexpectedly, but the existing case that I mentioned
> (SYS_mq_notify trying to register a closed fd) would remain. It
> seems to me that moving SYS_mq_notify into the worker thread as you
> suggested earlier is the cleanest option if we're worrying about
> recv.

OK, I've done that and reworked this series in a way that I think addresses
all the problems.

Rich

[-- Attachment #2: 0001-fix-pthread_detach-inadvertently-acting-as-cancellat.patch --]
[-- Type: text/plain, Size: 1360 bytes --]

From c3cd04fa5fecd2c349aefde090c602554ee4fa24 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Sat, 11 Feb 2023 09:54:12 -0500
Subject: [PATCH 1/5] fix pthread_detach inadvertently acting as cancellation
 point in race case

disabling cancellation around the pthread_join call seems to be the
safest and logically simplest fix. i believe it would also be possible
to just perform the unmap directly here after __tl_sync, removing the
dependency on pthread_join, but such an approach would redundantly
encode a lot more implementation assumptions.
---
 src/thread/pthread_detach.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/thread/pthread_detach.c b/src/thread/pthread_detach.c
index 77772af2..d73a500e 100644
--- a/src/thread/pthread_detach.c
+++ b/src/thread/pthread_detach.c
@@ -5,8 +5,12 @@ static int __pthread_detach(pthread_t t)
 {
 	/* If the cas fails, detach state is either already-detached
 	 * or exiting/exited, and pthread_join will trap or cleanup. */
-	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE)
-		return __pthread_join(t, 0);
+	if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE) {
+		int cs;
+		__pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
+		__pthread_join(t, 0);
+		__pthread_setcancelstate(cs, 0);
+	}
 	return 0;
 }
 
-- 
2.21.0


[-- Attachment #3: 0002-mq_notify-use-semaphore-instead-of-barrier-to-sync-a.patch --]
[-- Type: text/plain, Size: 2162 bytes --]

From fde6891e59c315e4a0ec7e69182e1d6314e3795e Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:17:02 -0500
Subject: [PATCH 2/5] mq_notify: use semaphore instead of barrier to sync args
 consumption

semaphores are a much lighter primitive, and more idiomatic with
current usage in the code base.
---
 src/mq/mq_notify.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 221591c7..a42888d2 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -4,10 +4,11 @@
 #include <sys/socket.h>
 #include <signal.h>
 #include <unistd.h>
+#include <semaphore.h>
 #include "syscall.h"
 
 struct args {
-	pthread_barrier_t barrier;
+	sem_t sem;
 	int sock;
 	const struct sigevent *sev;
 };
@@ -21,7 +22,7 @@ static void *start(void *p)
 	void (*func)(union sigval) = args->sev->sigev_notify_function;
 	union sigval val = args->sev->sigev_value;
 
-	pthread_barrier_wait(&args->barrier);
+	sem_post(&args->sem);
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
 	close(s);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
@@ -37,6 +38,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	int s;
 	struct sigevent sev2;
 	static const char zeros[32];
+	int cs;
 
 	if (!sev || sev->sigev_notify != SIGEV_THREAD)
 		return syscall(SYS_mq_notify, mqd, sev);
@@ -48,7 +50,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
 	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
-	pthread_barrier_init(&args.barrier, 0, 2);
+	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
 		__syscall(SYS_close, s);
@@ -56,8 +58,10 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 		return -1;
 	}
 
-	pthread_barrier_wait(&args.barrier);
-	pthread_barrier_destroy(&args.barrier);
+	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
+	sem_wait(&args.sem);
+	pthread_setcancelstate(cs, 0);
+	sem_destroy(&args.sem);
 
 	sev2.sigev_notify = SIGEV_THREAD;
 	sev2.sigev_signo = s;
-- 
2.21.0


[-- Attachment #4: 0003-mq_notify-rework-to-fix-use-after-close-double-close.patch --]
[-- Type: text/plain, Size: 2743 bytes --]

From 1c75ded5ad35aaf3f4e1da316c50869c73903420 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 10 Feb 2023 11:22:45 -0500
Subject: [PATCH 3/5] mq_notify: rework to fix use-after-close/double-close
 bugs

in the error path where the mq_notify syscall fails, the initiating
thread may have closed the socket before the worker thread calls recv
on it. even in the absence of such a race, if the recv call failed,
e.g. due to seccomp policy blocking it, the worker thread could
proceed to close, producing a double-close condition.

this can all be simplified by moving the mq_notify syscall into the
new thread, so that the error case does not require pthread_cancel.
now, the initiating thread only needs to read back the error status
after waiting for the worker thread to consume its arguments.
---
 src/mq/mq_notify.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index a42888d2..8eac71ed 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -10,6 +10,8 @@
 struct args {
 	sem_t sem;
 	int sock;
+	mqd_t mqd;
+	int err;
 	const struct sigevent *sev;
 };
 
@@ -21,8 +23,21 @@ static void *start(void *p)
 	int s = args->sock;
 	void (*func)(union sigval) = args->sev->sigev_notify_function;
 	union sigval val = args->sev->sigev_value;
+	struct sigevent sev2;
+	static const char zeros[32];
+	int err = 0;
 
+	sev2.sigev_notify = SIGEV_THREAD;
+	sev2.sigev_signo = s;
+	sev2.sigev_value.sival_ptr = (void *)&zeros;
+
+	err = 0;
+	if (syscall(SYS_mq_notify, args->mqd, &sev2) < 0)
+		err = errno;
+	args->err = err;
 	sem_post(&args->sem);
+	if (err) return 0;
+
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
 	close(s);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
@@ -36,8 +51,6 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	pthread_attr_t attr;
 	pthread_t td;
 	int s;
-	struct sigevent sev2;
-	static const char zeros[32];
 	int cs;
 
 	if (!sev || sev->sigev_notify != SIGEV_THREAD)
@@ -46,6 +59,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	s = socket(AF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, 0);
 	if (s < 0) return -1;
 	args.sock = s;
+	args.mqd = mqd;
 
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
@@ -63,13 +77,9 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	pthread_setcancelstate(cs, 0);
 	sem_destroy(&args.sem);
 
-	sev2.sigev_notify = SIGEV_THREAD;
-	sev2.sigev_signo = s;
-	sev2.sigev_value.sival_ptr = (void *)&zeros;
-
-	if (syscall(SYS_mq_notify, mqd, &sev2) < 0) {
-		pthread_cancel(td);
+	if (args.err) {
 		__syscall(SYS_close, s);
+		errno = args.err;
 		return -1;
 	}
 
-- 
2.21.0


[-- Attachment #5: 0004-mq_notify-join-worker-thread-before-returning-in-err.patch --]
[-- Type: text/plain, Size: 1612 bytes --]

From b6130c3da61e3d1bc61be88b0d3c8d1687089dfa Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Sat, 11 Feb 2023 19:09:53 -0500
Subject: [PATCH 4/5] mq_notify: join worker thread before returning in error
 path

this avoids leaving behind transient resource consumption whose
cleanup is subject to scheduling behavior.
---
 src/mq/mq_notify.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 8eac71ed..0ffee7d1 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -38,6 +38,7 @@ static void *start(void *p)
 	sem_post(&args->sem);
 	if (err) return 0;
 
+	pthread_detach(pthread_self());
 	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
 	close(s);
 	if (n==sizeof buf && buf[sizeof buf - 1] == 1)
@@ -63,7 +64,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	if (sev->sigev_notify_attributes) attr = *sev->sigev_notify_attributes;
 	else pthread_attr_init(&attr);
-	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
+	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
 	sem_init(&args.sem, 0, 0);
 
 	if (pthread_create(&td, &attr, start, &args)) {
@@ -74,14 +75,16 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 
 	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
 	sem_wait(&args.sem);
-	pthread_setcancelstate(cs, 0);
 	sem_destroy(&args.sem);
 
 	if (args.err) {
 		__syscall(SYS_close, s);
+		pthread_join(td, 0);
+		pthread_setcancelstate(cs, 0);
 		errno = args.err;
 		return -1;
 	}
 
+	pthread_setcancelstate(cs, 0);
 	return 0;
 }
-- 
2.21.0


[-- Attachment #6: 0005-mq_notify-block-all-application-signals-in-the-worke.patch --]
[-- Type: text/plain, Size: 1867 bytes --]

From 0968c0c29252eb5fbca147fca901a53fb1b8ae04 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Sat, 11 Feb 2023 19:13:10 -0500
Subject: [PATCH 5/5] mq_notify: block all (application) signals in the worker
 thread

until the mq notification event arrives, it is mandatory that signals
be blocked. otherwise, a signal can be received, and its handler
executed, in a thread which does not yet exist on the abstract
machine.

after the point of the event arriving, having signals blocked is not a
conformance requirement but a QoI requirement. while the application
can unblock any signals it wants unblocked in the event handler
thread, if they did not start out blocked, it could not block them
without a race window where they are momentarily unblocked, and this
would preclude controlled delivery or other forms of acceptance
(sigwait, etc.) anywhere in the application.
---
 src/mq/mq_notify.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index 0ffee7d1..01ec3517 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -53,6 +53,7 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	pthread_t td;
 	int s;
 	int cs;
+	sigset_t allmask, origmask;
 
 	if (!sev || sev->sigev_notify != SIGEV_THREAD)
 		return syscall(SYS_mq_notify, mqd, sev);
@@ -67,11 +68,15 @@ int mq_notify(mqd_t mqd, const struct sigevent *sev)
 	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
 	sem_init(&args.sem, 0, 0);
 
+	sigfillset(&allmask);
+	pthread_sigmask(SIG_BLOCK, &allmask, &origmask);
 	if (pthread_create(&td, &attr, start, &args)) {
 		__syscall(SYS_close, s);
+		pthread_sigmask(SIG_SETMASK, &origmask, 0);
 		errno = EAGAIN;
 		return -1;
 	}
+	pthread_sigmask(SIG_SETMASK, &origmask, 0);
 
 	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &cs);
 	sem_wait(&args.sem);
-- 
2.21.0


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-12  0:32                           ` Rich Felker
@ 2023-02-12 18:23                             ` Alexey Izbyshev
  2023-02-12 19:35                               ` Alexey Izbyshev
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-12 18:23 UTC (permalink / raw)
  To: musl

On 2023-02-12 03:32, Rich Felker wrote:
diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
index a42888d2..8eac71ed 100644
--- a/src/mq/mq_notify.c
+++ b/src/mq/mq_notify.c
@@ -10,6 +10,8 @@
  struct args {
  	sem_t sem;
  	int sock;
+	mqd_t mqd;
+	int err;
  	const struct sigevent *sev;
  };

@@ -21,8 +23,21 @@ static void *start(void *p)
  	int s = args->sock;
  	void (*func)(union sigval) = args->sev->sigev_notify_function;
  	union sigval val = args->sev->sigev_value;
+	struct sigevent sev2;
+	static const char zeros[32];
+	int err = 0;

+	sev2.sigev_notify = SIGEV_THREAD;
+	sev2.sigev_signo = s;
+	sev2.sigev_value.sival_ptr = (void *)&zeros;
+
+	err = 0;

This assignment is redundant.

Maybe this hunk could be simplified by getting rid of err and simply 
doing "args->err = -__syscall(SYS_mq_notify, args->mqd, &sev2)".

Except for this nit, the patches look good to me, thanks!

Alexey

+	if (syscall(SYS_mq_notify, args->mqd, &sev2) < 0)
+		err = errno;
+	args->err = err;
  	sem_post(&args->sem);
+	if (err) return 0;
+
  	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
  	close(s);
  	if (n==sizeof buf && buf[sizeof buf - 1] == 1)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-12 18:23                             ` Alexey Izbyshev
@ 2023-02-12 19:35                               ` Alexey Izbyshev
  2023-02-12 20:04                                 ` Rich Felker
  0 siblings, 1 reply; 21+ messages in thread
From: Alexey Izbyshev @ 2023-02-12 19:35 UTC (permalink / raw)
  To: musl

On 2023-02-12 21:23, Alexey Izbyshev wrote:
> On 2023-02-12 03:32, Rich Felker wrote:
> diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
> index a42888d2..8eac71ed 100644
> --- a/src/mq/mq_notify.c
> +++ b/src/mq/mq_notify.c
> @@ -10,6 +10,8 @@
>  struct args {
>  	sem_t sem;
>  	int sock;
> +	mqd_t mqd;
> +	int err;
>  	const struct sigevent *sev;
>  };
> 
> @@ -21,8 +23,21 @@ static void *start(void *p)
>  	int s = args->sock;
>  	void (*func)(union sigval) = args->sev->sigev_notify_function;
>  	union sigval val = args->sev->sigev_value;
> +	struct sigevent sev2;
> +	static const char zeros[32];
> +	int err = 0;
> 
> +	sev2.sigev_notify = SIGEV_THREAD;
> +	sev2.sigev_signo = s;
> +	sev2.sigev_value.sival_ptr = (void *)&zeros;
> +
> +	err = 0;
> 
> This assignment is redundant.
> 
> Maybe this hunk could be simplified by getting rid of err and simply
> doing "args->err = -__syscall(SYS_mq_notify, args->mqd, &sev2)".
> 
Never mind, err is needed because we can't access args->err after 
sem_post.

Alexey

> Except for this nit, the patches look good to me, thanks!
> 
> Alexey
> 
> +	if (syscall(SYS_mq_notify, args->mqd, &sev2) < 0)
> +		err = errno;
> +	args->err = err;
>  	sem_post(&args->sem);
> +	if (err) return 0;
> +
>  	n = recv(s, buf, sizeof(buf), MSG_NOSIGNAL|MSG_WAITALL);
>  	close(s);
>  	if (n==sizeof buf && buf[sizeof buf - 1] == 1)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [musl] [PATCH] mq_notify: fix close/recv race on failure path
  2023-02-12 19:35                               ` Alexey Izbyshev
@ 2023-02-12 20:04                                 ` Rich Felker
  0 siblings, 0 replies; 21+ messages in thread
From: Rich Felker @ 2023-02-12 20:04 UTC (permalink / raw)
  To: musl

On Sun, Feb 12, 2023 at 10:35:39PM +0300, Alexey Izbyshev wrote:
> On 2023-02-12 21:23, Alexey Izbyshev wrote:
> >On 2023-02-12 03:32, Rich Felker wrote:
> >diff --git a/src/mq/mq_notify.c b/src/mq/mq_notify.c
> >index a42888d2..8eac71ed 100644
> >--- a/src/mq/mq_notify.c
> >+++ b/src/mq/mq_notify.c
> >@@ -10,6 +10,8 @@
> > struct args {
> > 	sem_t sem;
> > 	int sock;
> >+	mqd_t mqd;
> >+	int err;
> > 	const struct sigevent *sev;
> > };
> >
> >@@ -21,8 +23,21 @@ static void *start(void *p)
> > 	int s = args->sock;
> > 	void (*func)(union sigval) = args->sev->sigev_notify_function;
> > 	union sigval val = args->sev->sigev_value;
> >+	struct sigevent sev2;
> >+	static const char zeros[32];
> >+	int err = 0;
> >
> >+	sev2.sigev_notify = SIGEV_THREAD;
> >+	sev2.sigev_signo = s;
> >+	sev2.sigev_value.sival_ptr = (void *)&zeros;
> >+
> >+	err = 0;
> >
> >This assignment is redundant.
> >
> >Maybe this hunk could be simplified by getting rid of err and simply
> >doing "args->err = -__syscall(SYS_mq_notify, args->mqd, &sev2)".
> >
> Never mind, err is needed because we can't access args->err after
> sem_post.

Yep. I'll still switch to __syscall and get rid of the redundant
initialization, but err is needed since we can't access args later.
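
Something like this, on top of patch 3 above (untested; __syscall
returns the negated error code, so err ends up 0 or a positive errno
value):

	int err = -__syscall(SYS_mq_notify, args->mqd, &sev2);
	args->err = err;
	sem_post(&args->sem);
	/* err must remain a local: args lives on the parent's stack and
	 * may be gone as soon as sem_post is observed. */
	if (err) return 0;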

Rich

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2023-02-12 20:04 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-09 10:46 [musl] [PATCH] mq_notify: fix close/recv race on failure path Alexey Izbyshev
2022-12-14  2:26 ` Rich Felker
2022-12-14  6:49   ` Alexey Izbyshev
2023-02-10 16:29     ` Rich Felker
2023-02-11 14:45       ` Alexey Izbyshev
2023-02-11 14:52         ` Rich Felker
2023-02-11 15:13           ` Alexey Izbyshev
2023-02-11 15:06         ` Rich Felker
2023-02-11 17:13           ` Markus Wichmann
2023-02-11 17:46             ` Rich Felker
2023-02-11 17:50             ` Alexey Izbyshev
2023-02-11 17:59               ` Rich Felker
2023-02-11 18:08                 ` Alexey Izbyshev
2023-02-11 18:35                   ` Rich Felker
2023-02-11 19:28                     ` Alexey Izbyshev
2023-02-11 19:49                       ` Rich Felker
2023-02-11 20:14                         ` Alexey Izbyshev
2023-02-12  0:32                           ` Rich Felker
2023-02-12 18:23                             ` Alexey Izbyshev
2023-02-12 19:35                               ` Alexey Izbyshev
2023-02-12 20:04                                 ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
