From: Rich Felker
Newsgroups: gmane.linux.lib.musl.general
Subject: Re: __synccall: deadlock and reliance on racy /proc/self/task
Date: Sat, 9 Feb 2019 19:52:50 -0500
Message-ID: <20190210005250.GZ23599@brightrain.aerifal.cx>
References: <1cc54dbe2e4832d804184f33cda0bdd1@ispras.ru> <20190207183626.GQ23599@brightrain.aerifal.cx> <20190208183357.GX23599@brightrain.aerifal.cx> <20190209162101.GN21289@port70.net> <6e0306699add531af519843de20c343a@ispras.ru> <20190209214045.GO21289@port70.net>
In-Reply-To: <20190209214045.GO21289@port70.net>
Reply-To: musl@lists.openwall.com
To: musl@lists.openwall.com, Alexey Izbyshev
User-Agent: Mutt/1.5.21 (2010-09-15)

On Sat, Feb 09, 2019 at 10:40:45PM +0100, Szabolcs Nagy wrote:
> * Alexey Izbyshev [2019-02-09 21:33:32 +0300]:
> > On 2019-02-09 19:21, Szabolcs Nagy wrote:
> > > * Rich Felker [2019-02-08 13:33:57 -0500]:
> > > > On Fri, Feb 08, 2019 at 09:14:48PM +0300, Alexey Izbyshev wrote:
> > > > > On 2/7/19 9:36 PM, Rich Felker wrote:
> > > > > > Does it work if we force two iterations of the readdir loop with no
> > > > > > tasks missed, rather than just one, to catch the case of missed
> > > > > > concurrent additions? I'm not sure. But all this makes me really
> > > > > > uncomfortable with the current approach.
> > > > >
> > > > > I've tested with 0, 1, 2 and 3 retries of the main loop if miss_cnt
> > > > > == 0. The test eventually failed in all cases, with 0 retries
> > > > > requiring only a handful of iterations, 1 -- on the order of 100, 2
> > > > > -- on the order of 10000 and 3 -- on the order of 100000.
> > > >
> > > > Do you have a theory on the mechanism of failure here? I'm guessing
> > > > it's something like this: there's a thread that goes unseen in the
> > > > first round, and during the second round, it creates a new thread and
> > > > exits itself. The exit gets seen (again, it doesn't show up in the
> > > > dirents) but the new thread it created still doesn't. Is that right?
> > > >
> > > > In any case, it looks like the whole mechanism we're using is
> > > > unreliable, so something needs to be done. My leaning is to go with
> > > > the global thread list and atomicity of list-unlock with exit.
> > >
> > > yes that sounds possible, i added some instrumentation to musl
> > > and the trace shows situations like that before the deadlock,
> > > exiting threads can even cause old (previously seen) entries to
> > > disappear from the dir.
> > >
> > Thanks for the thorough instrumentation! Your traces confirm both my theory
> > about the deadlock and unreliability of /proc/self/task.
> >
> > I'd also done a very light instrumentation just before I got your email, but
> > it took me a while to understand the output I got (see below).
>
> the attached patch fixes the issue on my machine.
> i don't know if this is just luck.
>
> the assumption is that if /proc/self/task is read twice such that
> all tids in it seem to be active and caught, then all the active
> threads of the process are caught (no new threads that are already
> started but not visible there yet)

I'm skeptical of whether this should work in principle. If the first
scan of /proc/self/task misses tid J, and during the next scan, tid J
creates tid K then exits, it seems like we could see the same set of
tids on both scans.

Maybe it's salvageable though. Since __block_new_threads is true, in
order for this to happen, tid J must have been between the
__block_new_threads check in pthread_create and the clone syscall at
the time __synccall started. The number of threads in such a state
seems to be bounded by some small constant (like 2) times
libc.threads_minus_1+1, computed at any point after
__block_new_threads is set to true, so sufficiently heavy presignaling
(heavier than we have now) might suffice to guarantee that all are
captured.

Rich
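
For concreteness, below is a rough standalone sketch of the kind of
scan-and-retry loop under discussion: keep rescanning /proc/self/task
and stop only after two consecutive passes see no tid that was not
already caught. This is not musl's __synccall; all names here
(scan_tasks, caught_add, MAXT) are invented for the sketch, and the
per-tid signaling/waiting the real code does is reduced to a comment.

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

#define MAXT 4096

static int caught[MAXT];
static int ncaught;

static int caught_has(int tid)
{
	for (int i = 0; i < ncaught; i++)
		if (caught[i] == tid) return 1;
	return 0;
}

static void caught_add(int tid)
{
	if (ncaught < MAXT) caught[ncaught++] = tid;
}

/* One pass over /proc/self/task; returns how many tids were seen for
 * the first time in this pass (loosely playing the role of the miss
 * count in the discussion).  In the real code each newly seen tid
 * would be signaled and waited for; here it is only recorded. */
static int scan_tasks(void)
{
	struct dirent *de;
	int newtids = 0;
	DIR *d = opendir("/proc/self/task");
	if (!d) return -1;
	while ((de = readdir(d))) {
		if (de->d_name[0] == '.') continue;
		int tid = atoi(de->d_name);
		if (!caught_has(tid)) {
			caught_add(tid);
			newtids++;
		}
	}
	closedir(d);
	return newtids;
}

int main(void)
{
	/* Stop only after two consecutive passes see nothing new.  The
	 * J-spawns-K-and-exits race described in the mail is exactly
	 * what such a loop cannot rule out. */
	int clean = 0;
	while (clean < 2) {
		int n = scan_tasks();
		if (n < 0) return 1;
		clean = n ? 0 : clean + 1;
	}
	printf("%d tids caught\n", ncaught);
	return 0;
}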
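
And a minimal sketch of the window described above, again with
invented names (block_new_threads stands in for __block_new_threads,
and a plain pthread_create call stands in for the raw clone syscall).
The only point is to show where, relative to the gate check, a thread
creation can already be committed yet not visible to a
/proc/self/task scan.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for __block_new_threads; set by the __synccall-like side. */
static atomic_int block_new_threads;

int create_thread_sketch(pthread_t *t, void *(*fn)(void *), void *arg)
{
	/* (1) Gate check: do not start new threads while a global
	 *     synchronization pass is in progress.  (The real code
	 *     would futex-wait rather than spin.) */
	while (atomic_load(&block_new_threads))
		;

	/* <-- The window: if the flag is set right here, this creation
	 *     is already committed, but the new thread may not show up
	 *     in /proc/self/task until after one or more scans. */

	/* (2) The actual thread start (stands in for the raw clone). */
	return pthread_create(t, 0, fn, arg);
}

static void *work(void *arg)
{
	(void)arg;
	return 0;
}

int main(void)
{
	pthread_t t;
	if (create_thread_sketch(&t, work, 0)) return 1;
	pthread_join(t, 0);
	puts("thread ran with the gate open");
	return 0;
}

At the instant the flag is set, only creators already past (1) but not
yet through (2) can slip a thread in, which is what bounds their
number by roughly the thread count at that moment and why heavier
presignaling might cover them.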