From: Rich Felker
Subject: Re: Draft outline of thread-list design
Date: Thu, 14 Feb 2019 17:32:24 -0500
To: Alexey Izbyshev
Cc: musl@lists.openwall.com
Message-ID: <20190214223224.GV23599@brightrain.aerifal.cx>
In-Reply-To: <66c00d56d718caa565209fd480158f98@ispras.ru>

On Fri, Feb 15, 2019 at 12:16:39AM +0300, Alexey Izbyshev wrote:
> On 2019-02-12 21:26, Rich Felker wrote:
> >pthread_join:
> >
> >A joiner can no longer see the exit of the individual kernel thread
> >via the exit futex (detach_state), so after seeing it in an exiting
> >state, it must instead use the thread list to confirm completion of
> >exit. The obvious way to do this is by taking a lock on the list and
> >immediately releasing it, but the actual taking of the lock can be
> >elided by simply doing a futex wait on the lock owner being equal to
> >the tid (or an exit sequence number if we prefer that) of the
> >exiting thread. In the case of tid reuse collisions, at worst this
> >reverts to the cost of waiting for the lock to be released.
>
> Since the kernel wakes only a single thread waiting on the ctid
> address, wouldn't the joiner still need to do a futex wake to
> unblock other potential waiters even if it doesn't actually take the
> lock by changing *ctid?

I'm not sure. If it's just a single wake rather than a broadcast then
yes, but only if it waited. If it observed the lock word already not
equal to the exiting thread's tid, without performing a futex wait,
then it doesn't have to do a futex wake.
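For concreteness, a rough sketch of that join-side wait, with
placeholder identifiers (__thread_list_lock, the raw futex syscalls)
rather than anything final:

/* Sketch only -- assumes the exiting thread's ctid points at the
 * list-lock word, so the kernel clears it and issues a single futex
 * wake when the kernel thread is fully gone. */
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

extern volatile int __thread_list_lock;

static void wait_for_thread_exit(int exiting_tid)
{
	int waited = 0;
	/* After detach_state shows the thread exiting, wait for the
	 * lock word to stop holding its tid instead of locking and
	 * unlocking the list. A tid-reuse collision only degrades to
	 * waiting for the lock to be released by its new owner. */
	while (__thread_list_lock == exiting_tid) {
		syscall(SYS_futex, &__thread_list_lock, FUTEX_WAIT,
			exiting_tid, 0);
		waited = 1;
	}
	/* The kernel's exit-time wake is a single wake; if we consumed
	 * it by sleeping, pass it along so another waiter on the lock
	 * word isn't left blocked. */
	if (waited)
		syscall(SYS_futex, &__thread_list_lock, FUTEX_WAKE, 1);
}

The conditional wake at the end just re-propagates the kernel's single
wake in case this waiter was the one that consumed it.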
> >__synccall:
> >
> >Take thread list lock. Signal each thread individually with tkill.
> >Signaled threads no longer need to enqueue themselves on a list;
> >they only need to wait until the signaling thread tells them to run
> >the callback, and report back when they have finished it, which can
> >be done via a single futex indicating whose turn it is to run.
> >(Conceptually, this should not even be needed, since the signaling
> >thread can just signal in sequence, but the intent is to be robust
> >against spurious signals arriving from outside sources.) The idea
> >is, for each thread: (1) set futex value to its tid, (2) send
> >signal, (3) wait on futex to become 0 again. The signal handler
> >simply returns if the futex value != its tid; otherwise it runs the
> >callback, then zeros the futex and performs a futex wake. Code
> >should be tiny compared to now, and need not pull in any dependency
> >on semaphores, PI futexes, etc.
>
> Wouldn't the handler also need to wait until *all* threads have run
> the callback? Otherwise, a thread might continue execution while its
> uid still differs from the uids of some other threads.

Yes, that's correct. We actually do need the current approach of first
capturing all the threads in a signal handler, then making them run
the callback, then releasing them to return, in three rounds. No
application code should be able to run with the process in a
partially-mutated state.

> In general, to the extent of my limited expertise, the design looks
> simple and clean. I'm not sure whether it's worth optimizing to
> reduce serialization pressure on pthread_create()/pthread_exit(),
> because creating a large number of short-lived threads doesn't look
> like a good idea anyway.

Yes. One thing I did notice is that the window where pthread_create
has to hold a lock to prevent new dlopen from happening is a lot
larger than the window where the thread list needs to be locked, and
it contains mmap/mprotect. I think we should add a new "DTLS lock"
here that's held for the whole time, with a protocol that if you need
both the DTLS lock and the thread list lock, you take them in that
order (dlopen would also need them both). This reduces the thread
list lock window to just the __clone call and the list update.

Rich
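For reference, a rough sketch of the per-thread futex handoff for
__synccall quoted above. Everything here is a placeholder (signal
number, names, raw syscalls), the sigaction setup and the list walk
are omitted, and the three-round capture/run/release structure from
the reply is not shown; this is only the basic "whose turn is it"
handoff:

#include <signal.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static volatile int turn;           /* tid whose turn it is; 0 = done */
static void (*callback)(void *);
static void *context;

static void handler(int sig)
{
	/* Stray or spurious signal: not our turn, just return. */
	if (turn != (int)syscall(SYS_gettid)) return;
	callback(context);
	turn = 0;
	syscall(SYS_futex, &turn, FUTEX_WAKE, 1);
}

/* Signaling thread, holding the thread list lock, does this for each
 * thread's tid in sequence. */
static void run_on(int tid)
{
	turn = tid;                                       /* (1) */
	syscall(SYS_tgkill, getpid(), tid, SIGRTMIN);     /* (2) */
	while (turn == tid)                               /* (3) */
		syscall(SYS_futex, &turn, FUTEX_WAIT, tid, 0);
}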