From: Rich Felker
Subject: Re: __synccall: deadlock and reliance on racy /proc/self/task
Date: Thu, 7 Feb 2019 13:36:26 -0500
To: musl@lists.openwall.com
Cc: Alexey Izbyshev
Message-ID: <20190207183626.GQ23599@brightrain.aerifal.cx>
In-Reply-To: <1cc54dbe2e4832d804184f33cda0bdd1@ispras.ru>

On Sun, Feb 03, 2019 at 12:40:39AM +0300, Alexey Izbyshev wrote:
> Hello!
>
> I've discovered that setuid() deadlocks on a simple stress test
> (attached: test-setuid.c) that creates threads concurrently with
> setuid(). (Tested on 1.1.21 on x86_64, kernel 4.15.x and 4.4.x). The
> gdb output:
>
> (gdb) info thr
>   Id   Target Id          Frame
> * 1    LWP 23555 "a.out"  __synccall (func=func@entry=0x402a7d
>        <do_setxid>, ctx=ctx@entry=0x7fffea85b17c)
>        at ../../musl/src/thread/synccall.c:144
>   2    LWP 23566 "a.out"  __syscall ()
>        at ../../musl/src/internal/x86_64/syscall.s:13
> (gdb) bt
> #0  __synccall (func=func@entry=0x402a7d <do_setxid>,
>     ctx=ctx@entry=0x7fffea85b17c)
>     at ../../musl/src/thread/synccall.c:144
> #1  0x0000000000402af9 in __setxid (nr=<optimized out>,
>     id=<optimized out>, eid=<optimized out>, sid=<optimized out>)
>     at ../../musl/src/unistd/setxid.c:33
> #2  0x00000000004001c8 in main ()
> (gdb) thr 2
> (gdb) bt
> #0  __syscall () at ../../musl/src/internal/x86_64/syscall.s:13
> #1  0x00000000004046b7 in __timedwait_cp
>     (addr=addr@entry=0x7fe99023475c, val=val@entry=-1,
>     clk=clk@entry=0, at=at@entry=0x0, priv=<optimized out>)
>     at ../../musl/src/thread/__timedwait.c:31
> #2  0x0000000000404591 in sem_timedwait
>     (sem=sem@entry=0x7fe99023475c, at=at@entry=0x0)
>     at ../../musl/src/thread/sem_timedwait.c:23
> #3  0x00000000004044e1 in sem_wait (sem=sem@entry=0x7fe99023475c)
>     at ../../musl/src/thread/sem_wait.c:5
> #4  0x00000000004037ae in handler (sig=<optimized out>)
>     at ../../musl/src/thread/synccall.c:43
> #5  <signal handler called>
> #6  __clone () at ../../musl/src/thread/x86_64/clone.s:17
> #7  0x00000000004028ec in __pthread_create (res=0x7fe990234eb8,
>     attrp=0x606260, entry=0x400300, arg=0x0)
>     at ../../musl/src/thread/pthread_create.c:286
> #8  0x0000000000400323 in thr ()
>
> The main thread spins in __synccall with futex() always returning
> ETIMEDOUT (line 139) and "head" is NULL, while handler() in the
> second thread is blocked on sem_wait() (line 40). So it looks like
> handler() updated the linked list, but the main thread doesn't see
> the update.
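[The attached test-setuid.c is not included in this archived message.
A stress test along the lines described above -- threads constantly
created and joined while the main thread loops on setuid(), which musl
routes through __synccall() -- might look roughly like this sketch:]

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static void *worker(void *arg)
	{
		(void)arg;
		return 0;                   /* exit immediately; the churn is the point */
	}

	static void *spawner(void *arg)
	{
		(void)arg;
		for (;;) {
			pthread_t t;
			if (pthread_create(&t, 0, worker, 0) == 0)
				pthread_join(t, 0); /* keep the thread count bounded */
		}
		return 0;
	}

	int main(void)
	{
		for (int i = 0; i < 4; i++) {
			pthread_t t;
			pthread_create(&t, 0, spawner, 0);
		}
		/* setuid(getuid()) changes nothing but still goes through
		 * __setxid/__synccall, so it exercises the same machinery. */
		for (unsigned long n = 0; ; n++) {
			if (setuid(getuid())) {
				perror("setuid");
				return 1;
			}
			if (n % 10000 == 0)
				fprintf(stderr, "%lu setuid calls\n", n);
		}
	}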
I don't understand how this state is reachable. Even if it were
reached once by failing to see the update to head, the update should
eventually be seen when retrying after timeout.

> For some reason __synccall accesses the list without a barrier (line
> 120), though I don't see why one wouldn't be necessary for correct
> observability of head->next. However, I'm testing on x86_64, so
> acquire/release semantics works without barriers.

The formal intent in musl is that all a_* are full seq_cst barriers.
On x86[_64] this used to not be the case; we just used a normal
store, but that turned out to be broken because in some places (and
apparently here in __synccall) there was code that depended on
a_store having acquire semantics too. See commits
3c43c0761e1725fd5f89a9c028cbf43250abb913 and
5a9c8c05a5a0cdced4122589184fd795b761bb4a.

If not for this fix, I could see this being related (but again, it
should see the update after timeout anyway). But since the barrier is
there now, it shouldn't happen.

> I thought that a possible explanation is that handler() got blocked
> in a *previous* setuid() call, but we didn't notice its list entry
> at that time and then overwrote "head" with NULL on the current call
> to setuid(). This seems to be possible because of the following.
>
> 1) There is a "presignalling" phase, where we may send a signal to
> *any* thread. Moreover, the number of signals we send may be *more*
> than the number of threads because some threads may exit while we're
> in the loop. As a result, SIGSYNCCALL may be pending after this
> loop.
>
> 	/* Initially send one signal per counted thread. But since we can't
> 	 * synchronize with thread creation/exit here, there could be too
> 	 * few signals. This initial signaling is just an optimization, not
> 	 * part of the logic. */
> 	for (i=libc.threads_minus_1; i; i--)
> 		__syscall(SYS_kill, pid, SIGSYNCCALL);
>
> 2) __synccall relies on /proc/self/task to get the list of *all*
> threads. However, since new threads can be created concurrently
> while we read /proc (if some threads were in pthread_create() after
> the __block_new_threads check when we set it to 1), I thought that
> /proc may miss some threads (that's actually why I started the whole
> exercise in the first place).
>
> So, if we miss a thread in (2) but it's created and signalled with a
> pending SIGSYNCCALL shortly after we exit the /proc loop (but before
> we reset the signal handler), "handler()" will run in that thread
> concurrently with us, and we may miss its list entry if the timing
> is right.

This seems plausible.

> I've checked that if I remove the "presignalling" loop, the deadlock
> disappears (at least, I could run the test for several minutes
> without any problem).
>
> Of course, the larger problem remains: if we may miss some threads
> because of /proc, we may fail to call the setuid() syscall in those
> threads. And that indeed easily happens in my second test (attached:
> test-setuid-mismatch.c; expected to be run as a suid binary; note
> that I tested both with and without "presignalling").

Does it work if we force two iterations of the readdir loop with no
tasks missed, rather than just one, to catch the case of missed
concurrent additions? I'm not sure. But all this makes me really
uncomfortable with the current approach.
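[To make the "two clean passes" question above concrete, here is a
rough sketch -- not musl's __synccall -- of rescanning /proc/self/task
until two consecutive passes find no task that hasn't already been
signalled. SIGSYNCCALL, mark_seen() and the bookkeeping are
illustrative stand-ins for musl's internal signal and chain logic:]

	#include <dirent.h>
	#include <signal.h>
	#include <stdlib.h>
	#include <sys/syscall.h>
	#include <sys/types.h>
	#include <unistd.h>

	#define SIGSYNCCALL SIGRTMIN   /* stand-in for musl's internal signal */

	static pid_t *seen;
	static size_t nseen, cap;

	/* Record tid; return 1 if it was not seen in any earlier pass. */
	static int mark_seen(pid_t tid)
	{
		for (size_t i = 0; i < nseen; i++)
			if (seen[i] == tid) return 0;
		if (nseen == cap) {
			cap = cap ? cap * 2 : 64;
			seen = realloc(seen, cap * sizeof *seen);
			if (!seen) abort();
		}
		seen[nseen++] = tid;
		return 1;
	}

	/* Rescan /proc/self/task until two consecutive passes turn up no
	 * task that hasn't already been signalled. A single clean pass can
	 * still race with concurrent thread creation; requiring two narrows
	 * (but, as noted above, may not close) that window. */
	static void signal_all_tasks(void)
	{
		int clean_passes = 0;
		while (clean_passes < 2) {
			int found_new = 0;
			DIR *d = opendir("/proc/self/task");
			if (!d) return;
			struct dirent *de;
			while ((de = readdir(d))) {
				if (de->d_name[0] == '.') continue;
				pid_t tid = atoi(de->d_name);
				if (mark_seen(tid)) {
					/* the real code would also wait for the handler
					 * to add itself to the chain before moving on */
					syscall(SYS_tgkill, getpid(), tid, SIGSYNCCALL);
					found_new = 1;
				}
			}
			closedir(d);
			clean_passes = found_new ? 0 : clean_passes + 1;
		}
	}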
> Both tests run on glibc (2.27) without any problem.

I think glibc has a different problem: there's a window at thread
exit where setxid can return success without having run the id change
in the exiting thread. In this case, assuming an attacker has code
execution in the process after dropping root, they can mmap malicious
code over top of the thread exit code and obtain code execution as
root.

> Would it be possible to fix __synccall in musl? Thanks!

Yes, but I don't think it's easy. I think we might really need to
adopt the design I proposed a while back with a global thread list
that's unlocked atomically with respect to thread exit, using the
exit futex. This is somewhat expensive/synchronizing, but it makes a
lot of things easier/safer, and optimizes access to dynamic TLS.

Rich
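[For illustration, a rough sketch of the kind of design described
above, not musl's actual code: a global thread list protected by a
futex lock, where an exiting thread unlinks itself and then lets the
kernel release the lock via the clear_child_tid ("exit futex")
mechanism, so the unlock is atomic with the thread ceasing to exist.
The names here (tl_lock, struct tnode, thread_exit_path) are invented
for the sketch:]

	#include <linux/futex.h>
	#include <stdatomic.h>
	#include <sys/syscall.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* Illustrative list node; the real thread structure would differ. */
	struct tnode {
		struct tnode *next, *prev;
		pid_t tid;
	};

	static struct tnode head = { &head, &head, 0 };  /* circular list */
	static _Atomic int tl_lock;  /* 0 = free, otherwise holder's tid */

	static void tl_lock_acquire(pid_t self)
	{
		for (;;) {
			int expect = 0;
			if (atomic_compare_exchange_strong(&tl_lock, &expect, self))
				return;
			/* Sleep until the lock word changes; the waker may be
			 * tl_lock_release() or the kernel's clear_child_tid wake. */
			syscall(SYS_futex, &tl_lock, FUTEX_WAIT, expect, 0, 0, 0);
		}
	}

	static void tl_lock_release(void)
	{
		atomic_store(&tl_lock, 0);
		syscall(SYS_futex, &tl_lock, FUTEX_WAKE, 1, 0, 0, 0);
	}

	/* While the lock is held, a __synccall-like operation could signal
	 * each thread by walking head.next..head, never consulting /proc,
	 * and no thread can be created or exit in the meantime. */

	/* Exit path for a thread that is a member of the list. */
	static _Noreturn void thread_exit_path(struct tnode *self)
	{
		tl_lock_acquire(self->tid);
		self->prev->next = self->next;   /* unlink while holding the lock */
		self->next->prev = self->prev;

		/* Point clear_child_tid at the lock word: when this thread
		 * exits, the kernel stores 0 there and does a FUTEX_WAKE, i.e.
		 * the lock is released atomically with the thread's death. */
		syscall(SYS_set_tid_address, &tl_lock);
		syscall(SYS_exit, 0);            /* never returns */
		for (;;);                        /* unreachable */
	}

[The expense mentioned above would come from every thread creation and
exit serializing on this one lock, in exchange for an always-accurate
thread list for __synccall and for dynamic TLS access.]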