From: Rich Felker
Subject: Re: C threads, v3.0
Date: Mon, 4 Aug 2014 18:36:57 -0400
To: musl@lists.openwall.com
Message-ID: <20140804223657.GB1674@brightrain.aerifal.cx>
In-Reply-To: <1407190618.24324.148.camel@eris.loria.fr>

On Tue, Aug 05, 2014 at 12:16:58AM +0200, Jens Gustedt wrote:
> On Monday, 04.08.2014 at 13:06 -0400, Rich Felker wrote:
> > On Mon, Aug 04, 2014 at 06:48:59PM +0200, Jens Gustedt wrote:
> > > On Monday, 04.08.2014 at 10:50 -0400, Rich Felker wrote:
> > > > On Mon, Aug 04, 2014 at 11:30:03AM +0200, Jens Gustedt wrote:
> > > > > I'd like to discuss two issues before going further. The first
> > > > > is of minor concern. In many places of that code _m_lock uses
> > > > > a flag, namely 0x40000000. I only found places where this bit
> > > > > is read, but not where it is set. Did I overlook something, or
> > > > > is this a leftover?
> > > >
> > > > It's set by the kernel when a thread dies while owning a robust
> > > > mutex.
> > >
> > > Ah, ok, good to know; this was not obvious at all to me.
> > >
> > > So I will leave this in, although C mutexes are not supposed to
> > > be robust.
> >
> > If the C committee decides that the case of exiting with a mutex
> > locked needs to be handled (i.e. the mutex must be permanently
> > locked, not falsely owned by a new thread that happens to get the
> > same id), then we need to track such mutexes in the robust list.
>
> I think it is very unlikely that such a decision would be taken. This
> stuff is supposed to run on just about any OS, not only POSIX ones,
> and imposing such restrictions on other OSes is unlikely to happen.
> For the same reason there is strong resistance against static
> initialization of mutexes, simply because one well-known OS seems
> unable to deal with it.

I'm less confident. In principle any OS can have each thread put the
mutexes it holds in a linked list and mark them all permanently locked
when the thread exits.
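Roughly like the following sketch. All names here are made up for
illustration, not musl internals; the 0x40000000 constant is the value
Linux uses for FUTEX_OWNER_DIED, i.e. the flag discussed above.

#include <stdatomic.h>

#define OWNER_DEAD 0x40000000 /* same value as Linux's FUTEX_OWNER_DIED */

/* Hypothetical mutex with an intrusive link for the owner's held-list. */
struct cmtx {
	atomic_uint lock;       /* 0 = free, else owner tid, maybe | OWNER_DEAD */
	struct cmtx *next_held; /* link in the owning thread's held-list */
};

static _Thread_local struct cmtx *held_list;

void cmtx_lock(struct cmtx *m, unsigned tid)
{
	unsigned expect = 0;
	/* spin for brevity; a real lock would futex-wait on contention */
	while (!atomic_compare_exchange_weak(&m->lock, &expect, tid))
		expect = 0;
	m->next_held = held_list;
	held_list = m;
}

void cmtx_unlock(struct cmtx *m)
{
	/* unlink from the held-list, then release */
	for (struct cmtx **p = &held_list; *p; p = &(*p)->next_held)
		if (*p == m) { *p = m->next_held; break; }
	atomic_store(&m->lock, 0);
}

/* Run at thread exit: anything still held gets poisoned so it stays
   permanently locked instead of appearing owned by whatever future
   thread happens to get the same id. */
void cmtx_mark_owner_dead(void)
{
	for (struct cmtx *m = held_list; m; m = m->next_held)
		atomic_fetch_or(&m->lock, OWNER_DEAD);
	held_list = 0;
}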
Robust mutexes are hard on POSIX only because they have to support
process-shared semantics, where a process can be killed asynchronously
by an uncatchable signal.

> > Technically we wouldn't have to rely on the kernel to do anything
> > with them (that only matters for the process-shared case; for
> > process-local mutexes we control thread exit and can handle them
> > ourselves), but letting the kernel do it does reduce the amount of
> > code in pthread_exit, so it might be desirable still.
>
> Ok, we definitely should leave it in, then.

I haven't reviewed the actual code yet, but I'll take a look and see
whether it looks like it would work. Actually, making recursive and
error-checking POSIX mutexes "always robust" (not exactly, but that's
roughly how it would work) is a change I should make soon anyway. And
I also have the private-futex patch pending, awaiting evidence that
it's actually useful before applying it.

> > > As I said, I think this is a feature that is rarely used anyhow.
> > > For the code that actually links pthread_key_create, it is not at
> > > all clear to me that pthread_setspecific is then used by a lot of
> > > threads. I don't have stats for this, obviously, but the tss
> > > stuff will be used even less.
> >
> > It's not that rare, and it's the only way in portable pre-C11 POSIX
> > programs to use thread-local storage. Presumably most such programs
> > check for the availability of gcc-style __thread and use it if
> > possible, only falling back to the old pthread TSD if it's not.
>
> With C11 threads they are guaranteed to have _Thread_local, so the
> check against __thread is then obsolete (and trivial).

Yes. That's why I said pre-C11 POSIX programs. (The usual fallback
pattern is sketched at the end of this message.)

> > The only possible interaction we could care about would be if POSIX
> > specified that PTHREAD_KEYS_MAX keys must be allocatable even if
> > some C11 keys have already been allocated, and/or vice versa. In
> > this case we would need to track the number of each separately and
> > allow a higher total. But I don't see how this would otherwise
> > affect the requirements on, or the implementation of, either one.
>
> The rules for calling destructors also might differ. For the moment C
> is sufficiently vague that almost any strategy would do, but who
> knows where this will drift.
>
> BTW, I think that the current implementation of
> __pthread_tsd_run_dtors is not completely in line with the text. As I
> understand it, each iteration should treat the keys in one batch per
> iteration, each batch consisting of those keys that have values at
> the beginning of that iteration. But that is hopefully completely
> irrelevant in the real world.

If I'm not mistaken, the iterations macro is only a minimum for the
number of times a dtor will be called; it seems permissible for the
implementation to call it more times. (A sketch of the batch-per-pass
reading is also at the end of this message.)

> > > We could implement tss separately. That's quite simple to do.
> >
> > Yes, but that of course increases the cost of a largely useless
> > feature (except in the case of needing a dtor), especially in
> > per-thread memory, which I would like to avoid unless it turns out
> > to be needed.
>
> I just implemented it, and whether I use that new implementation or
> the one with pthread_key_create underneath, the difference is 8
> bytes in the executable. The difference would only appear for an
> application that uses both features.

OK, I'll take a look.

Rich
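To make the fallback pattern above concrete, here is a minimal sketch
of what portable pre-C11 programs typically do. HAVE_TLS stands in for
whatever configure-time check the build system performs, and struct
state is a placeholder for the program's per-thread data; neither name
comes from any real project.

#include <pthread.h>
#include <stdlib.h>

struct state; /* the program's per-thread data, whatever it is */

#ifdef HAVE_TLS /* the compiler accepted gcc-style __thread */

static __thread struct state *cur_state;

struct state *get_state(void) { return cur_state; }
void set_state(struct state *s) { cur_state = s; }

#else /* fall back to pthread TSD */

static pthread_key_t state_key;
static pthread_once_t state_once = PTHREAD_ONCE_INIT;

static void state_key_init(void)
{
	/* free() as destructor so per-thread state is reclaimed on exit */
	pthread_key_create(&state_key, free);
}

struct state *get_state(void)
{
	pthread_once(&state_once, state_key_init);
	return pthread_getspecific(state_key);
}

void set_state(struct state *s)
{
	pthread_once(&state_once, state_key_init);
	pthread_setspecific(state_key, s);
}

#endif

With C11 the first branch just becomes _Thread_local, and the TSD
fallback can go away entirely.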
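As for the destructor batching Jens describes, the loop shape would be
something like the sketch below. It is not musl's actual
__pthread_tsd_run_dtors; the plain static arrays stand in for what is
really per-thread state, and KEYS_MAX for PTHREAD_KEYS_MAX.

#include <limits.h> /* PTHREAD_DESTRUCTOR_ITERATIONS */

#define KEYS_MAX 128 /* stand-in for PTHREAD_KEYS_MAX */

static void (*key_dtors[KEYS_MAX])(void *);
static void *tsd_values[KEYS_MAX]; /* really per-thread; simplified here */

void run_tsd_dtors(void)
{
	for (int pass = 0; pass < PTHREAD_DESTRUCTOR_ITERATIONS; pass++) {
		void *batch[KEYS_MAX];
		int ran_any = 0;

		/* the batch is fixed at the start of the pass: only keys
		   with values now get their dtor run in this pass */
		for (int k = 0; k < KEYS_MAX; k++)
			batch[k] = tsd_values[k];

		for (int k = 0; k < KEYS_MAX; k++) {
			if (batch[k] && key_dtors[k]) {
				tsd_values[k] = 0;
				key_dtors[k](batch[k]);
				ran_any = 1;
			}
		}
		/* dtors may have stored fresh values; if none did, we're
		   done.  Stopping after the iteration limit, as here, or
		   looping until everything is null both look conforming. */
		if (!ran_any) break;
	}
}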
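And the tss-with-pthread_key_create-underneath variant is essentially
a thin wrapper like this sketch. The typedefs and return codes are
local stand-ins for the <threads.h> ones so the sketch is
self-contained; the pthread and tss destructor types have identical
signatures, so no conversion is needed.

#include <pthread.h>

typedef pthread_key_t tss_t;
typedef void (*tss_dtor_t)(void *);
enum { thrd_success = 0, thrd_error = 1 }; /* stand-in values */

int tss_create(tss_t *key, tss_dtor_t dtor)
{
	return pthread_key_create(key, dtor) ? thrd_error : thrd_success;
}

void *tss_get(tss_t key)
{
	return pthread_getspecific(key);
}

int tss_set(tss_t key, void *val)
{
	return pthread_setspecific(key, val) ? thrd_error : thrd_success;
}

void tss_delete(tss_t key)
{
	pthread_key_delete(key);
}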