From: Alexey Izbyshev <izbyshev@ispras.ru>
To: musl@lists.openwall.com
Subject: __synccall: deadlock and reliance on racy /proc/self/task
Date: Sun, 03 Feb 2019 00:40:39 +0300 [thread overview]
Message-ID: <1cc54dbe2e4832d804184f33cda0bdd1@ispras.ru> (raw)
Hello!
I've discovered that setuid() deadlocks on a simple stress test
(attached: test-setuid.c) that creates threads concurrently with
setuid(). (Tested on 1.1.21 on x86_64, kernel 4.15.x and 4.4.x). The gdb
output:
(gdb) info thr
  Id  Target Id          Frame
* 1   LWP 23555 "a.out"  __synccall (func=func@entry=0x402a7d <do_setxid>, ctx=ctx@entry=0x7fffea85b17c) at ../../musl/src/thread/synccall.c:144
  2   LWP 23566 "a.out"  __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
(gdb) bt
#0  __synccall (func=func@entry=0x402a7d <do_setxid>, ctx=ctx@entry=0x7fffea85b17c) at ../../musl/src/thread/synccall.c:144
#1  0x0000000000402af9 in __setxid (nr=<optimized out>, id=<optimized out>, eid=<optimized out>, sid=<optimized out>) at ../../musl/src/unistd/setxid.c:33
#2  0x00000000004001c8 in main ()
(gdb) thr 2
(gdb) bt
#0  __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
#1  0x00000000004046b7 in __timedwait_cp (addr=addr@entry=0x7fe99023475c, val=val@entry=-1, clk=clk@entry=0, at=at@entry=0x0, priv=<optimized out>) at ../../musl/src/thread/__timedwait.c:31
#2  0x0000000000404591 in sem_timedwait (sem=sem@entry=0x7fe99023475c, at=at@entry=0x0) at ../../musl/src/thread/sem_timedwait.c:23
#3  0x00000000004044e1 in sem_wait (sem=sem@entry=0x7fe99023475c) at ../../musl/src/thread/sem_wait.c:5
#4  0x00000000004037ae in handler (sig=<optimized out>) at ../../musl/src/thread/synccall.c:43
#5  <signal handler called>
#6  __clone () at ../../musl/src/thread/x86_64/clone.s:17
#7  0x00000000004028ec in __pthread_create (res=0x7fe990234eb8, attrp=0x606260 <attr>, entry=0x400300 <thr>, arg=0x0) at ../../musl/src/thread/pthread_create.c:286
#8  0x0000000000400323 in thr ()
The main thread spins in __synccall with futex() always returning
ETIMEDOUT (line 139) and "head" is NULL, while handler() in the second
thread is blocked on sem_wait() (line 40). So it looks like handler()
updated the linked list, but the main thread doesn't see the update.
For some reason __synccall accesses the list without a barrier (line
120), though I don't see why one wouldn't be necessary for correct
observability of head->next. However, I'm testing on x86_64, where
plain loads and stores already have acquire/release ordering, so a
missing barrier can't by itself explain the hang on this machine.
I thought that a possible explanation is that handler() got blocked in a
*previous* setuid() call, but we didn't notice its list entry at that
time and then overwrote "head" with NULL on the current call to
setuid(). This seems to be possible because of the following.
1) There is a "presignalling" phase, where we may send a signal to *any*
thread. Moreover, the number of signals we send may be *more* than the
number of threads, because some threads may exit while we're in the
loop. As a result, a SIGSYNCCALL may still be pending after this loop.
	/* Initially send one signal per counted thread. But since we can't
	 * synchronize with thread creation/exit here, there could be too
	 * few signals. This initial signaling is just an optimization, not
	 * part of the logic. */
	for (i=libc.threads_minus_1; i; i--)
		__syscall(SYS_kill, pid, SIGSYNCCALL);
2) __synccall relies on /proc/self/task to get the list of *all*
threads. However, new threads can be created concurrently while we read
/proc (if some threads were inside pthread_create(), past the
__block_new_threads check, when we set it to 1), so I thought /proc may
miss some threads (that's actually why I started the whole exercise in
the first place).
So, if we miss a thread in (2) but it's created and signalled with a
pending SIGSYNCCALL shortly after we exit the /proc loop (but before we
reset the signal handler), handler() will run in that thread
concurrently with us, and we may miss its list entry if the timing is
right.
I've checked that if I remove the "presignalling" loop, the deadlock
disappears (at least, I could run the test for several minutes without
any problem).
Of course, the larger problem remains: if we may miss some threads
because of /proc, we may fail to run the setuid() syscall in those
threads. And that indeed easily happens in my second test (attached:
test-setuid-mismatch.c; expected to be run as a suid binary; note that
I tested both with and without presignalling).
Both tests run on glibc (2.27) without any problem. Would it be possible
to fix __synccall in musl? Thanks!
(Please CC me on answering, I'm not subscribed to the list).
Alexey
[-- Attachment #2: test-setuid.c --]
[-- Type: text/x-c; name=test-setuid.c, Size: 656 bytes --]
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define NUM_THREADS 1

static pthread_attr_t attr;

void *thr(void *arg) {
	if (pthread_create(&(pthread_t){0}, &attr, thr, 0))
		abort();
	return 0;
}

int main(void) {
	pthread_attr_init(&attr);
	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
	for (int i = 0; i < NUM_THREADS; i++)
		if (pthread_create(&(pthread_t){0}, &attr, thr, 0))
			abort();
	int euid = geteuid();
	for (int it = 0;; it++) {
		printf("%d\n", it);
		if (seteuid(euid)) {
			perror("seteuid");
			abort();
		}
	}
}
[-- Attachment #3: test-setuid-mismatch.c --]
[-- Type: text/x-c; name=test-setuid-mismatch.c, Size: 1162 bytes --]
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define NUM_THREADS 1

static pthread_attr_t attr;
static volatile int euid = -1, gen, num;

void *thr(void *arg) {
	while (euid == -1)
		;
	int cur_gen = gen;
	int teuid = syscall(SYS_geteuid);
	if (teuid != euid) {
		printf("mismatch: %d != %d\n", teuid, euid);
		sleep(1000000);
	}
	__sync_fetch_and_add(&num, 1);
	while (cur_gen == gen)
		;
	if (pthread_create(&(pthread_t){0}, &attr, thr, 0))
		abort();
	return 0;
}

int main(void) {
	pthread_attr_init(&attr);
	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
	for (int i = 0; i < NUM_THREADS; i++)
		if (pthread_create(&(pthread_t){0}, &attr, thr, 0))
			abort();
	printf("uid = %d euid = %d\n", getuid(), geteuid());
	int ruid = getuid();
	int cur_euid = 0;
	for (int it = 0;; it++) {
		printf("%d\n", it);
		cur_euid = cur_euid ? 0 : ruid;
		if (seteuid(cur_euid)) {
			perror("seteuid");
			abort();
		}
		num = 0;
		euid = cur_euid;
		while (num < NUM_THREADS)
			;
		euid = -1;
		gen = 1 - gen;
	}
}