From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 31 Oct 2023 17:21:01 +0100
From: Markus Wichmann <nullplan@gmx.net>
To: musl@lists.openwall.com
Reply-To: musl@lists.openwall.com
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="cqEDKSHmFa7KumqT"
Content-Disposition: inline
Subject: [musl] synccall patches

--cqEDKSHmFa7KumqT
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi all,

fittingly for Reformation Day, I chose to actually send in some patches
today, rather than just provide comments. And fittingly for Halloween,
they concern a scary part of musl, namely __synccall().

First I noticed that the initial return value set in __setxid() is not a
valid return value for the functions that call it. Those are only
allowed to return 0 or -1, not 1. That is easily fixed.

Then I noticed that __synccall() has a logic bug, explained in the
commit message. It would not be a bug if semaphores gave any kind of
preference to the thread that has waited longest, but musl implements no
such policy, and the kernel explicitly documents that FUTEX_WAKE will
wake a random thread.
At first I thought the best solution would be to forgo the serialized
execution of the callback: just release all threads in line 97 (and then
fix the callbacks to cope with parallel execution). But POSIX semaphores
have no way to do that, and a mass-release loop like the one on line 110
would have the same issue as described here; it would only be less
likely to happen, with the likelihood rising with the number of threads.
So a new semaphore it is.

Ciao,
Markus

--cqEDKSHmFa7KumqT
Content-Type: text/x-diff; charset=us-ascii
Content-Disposition: attachment; filename="0001-Initialize-setxid-retval-to-valid-value.patch"

From 44e97fb238670e83eff31c3894cc8a024cef051c Mon Sep 17 00:00:00 2001
From: Markus Wichmann <nullplan@gmx.net>
Date: Tue, 31 Oct 2023 16:59:47 +0100
Subject: [PATCH 1/2] Initialize setxid retval to valid value.

If __synccall() fails to capture all threads because tkill fails for
some reason other than EAGAIN, then the callback given will never be
executed, so nothing will ever overwrite the initial value, and that is
the value that will be returned from the function. The previous setting
of 1 is not a valid value for setuid() et al. to return. I chose -EAGAIN
since I don't know ahead of time why the synccall failed, but EAGAIN is
a specified error code for a possibly temporary failure in setuid().

---
 src/unistd/setxid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/unistd/setxid.c b/src/unistd/setxid.c
index 487c1a16..0a2d9d12 100644
--- a/src/unistd/setxid.c
+++ b/src/unistd/setxid.c
@@ -28,7 +28,7 @@ int __setxid(int nr, int id, int eid, int sid)
 {
 	/* ret is initially nonzero so that failure of the first thread does not
 	 * trigger the safety kill above.
 	 */
-	struct ctx c = { .nr = nr, .id = id, .eid = eid, .sid = sid, .ret = 1 };
+	struct ctx c = { .nr = nr, .id = id, .eid = eid, .sid = sid, .ret = -EAGAIN };
 	__synccall(do_setxid, &c);
 	return __syscall_ret(c.ret);
 }
-- 
2.39.2

--cqEDKSHmFa7KumqT
Content-Type: text/x-diff; charset=us-ascii
Content-Disposition: attachment; filename="0002-synccall-Add-exit_sem.patch"

From f0793c5565b0670fa5283edaaadf8fbdb7f197be Mon Sep 17 00:00:00 2001
From: Markus Wichmann <nullplan@gmx.net>
Date: Tue, 31 Oct 2023 17:03:44 +0100
Subject: [PATCH 2/2] synccall: Add exit_sem.

The code intends for the sem_post() in line 97 (now 98) to only unblock
target threads waiting on line 29. But after the first thread is
released, the next sem_post() might also unblock a thread waiting on
line 36. That would cause the thread to return to the execution of user
code before all threads are done, leading to user code being executed in
a mixed-credentials environment. What's more, if this happens more than
once, the mass release on line 110 (now line 111) will cause multiple
threads to execute the callback at the same time, and the callbacks are
currently not written to cope with that situation.

Adding another semaphore allows the caller to say explicitly which
threads it wants to release.

---
 src/thread/synccall.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/src/thread/synccall.c b/src/thread/synccall.c
index a6b177c0..38597254 100644
--- a/src/thread/synccall.c
+++ b/src/thread/synccall.c
@@ -11,7 +11,7 @@ weak_alias(dummy_0, __tl_unlock);
 
 static int target_tid;
 static void (*callback)(void *), *context;
-static sem_t target_sem, caller_sem;
+static sem_t target_sem, caller_sem, exit_sem;
 
 static void dummy(void *p)
 {
@@ -33,7 +33,7 @@ static void handler(int sig)
 
 	/* Inform caller we've complered the callback and wait
 	 * for the caller to release us to return.
 	 */
 	sem_post(&caller_sem);
-	sem_wait(&target_sem);
+	sem_wait(&exit_sem);
 
 	/* Inform caller we are returning and state is destroyable. */
 	sem_post(&caller_sem);
@@ -62,6 +62,7 @@ void __synccall(void (*func)(void *), void *ctx)
 
 	sem_init(&target_sem, 0, 0);
 	sem_init(&caller_sem, 0, 0);
+	sem_init(&exit_sem, 0, 0);
 
 	if (!libc.threads_minus_1 || __syscall(SYS_gettid) != self->tid)
 		goto single_threaded;
@@ -107,12 +108,13 @@ single_threaded:
 
 	/* Only release the caught threads once all threads, including the
 	 * caller, have returned from the callback function. */
 	for (i=0; i