mailing list of musl libc
From: Jens Gustedt <jens.gustedt@inria.fr>
To: musl@lists.openwall.com
Subject: Re: PATCH: don't call cleanup handlers after a regular return from the thread start function
Date: Wed, 06 Aug 2014 01:19:58 +0200
Message-ID: <1407280798.24324.262.camel@eris.loria.fr>
In-Reply-To: <20140805214827.GO1674@brightrain.aerifal.cx>

On Tue, Aug 05, 2014 at 17:48 -0400, Rich Felker wrote:
> On Tue, Aug 05, 2014 at 11:05:41PM +0200, Jens Gustedt wrote:
> > It is not explicitly forbidden and I think this is sufficient to allow
> > for it.
> 
> Actually I don't think it's possible to implement a call_once that
> supports longjmp out without a longjmp which does unwinding,

A simple longjmp, perhaps not; a longjmp that is followed by thread
termination, yes. I have just done it, so it is possible :)

As I mentioned earlier, you can do this by having everything
statically allocated. I am using a struct for once_flag that has an
`int` for control, another `int` for waiters, and a `struct __ptcb`
for the call to _pthread_cleanup_push. call_once is then just a copy
of pthread_once that uses these fields of the once_flag instead of
the local variables.
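
To make that concrete, here is a minimal sketch of the layout I mean;
the field names and the exact shape of `struct __ptcb` are
illustrative assumptions, not the actual patch:

struct __ptcb {
        void (*__f)(void *);    /* cleanup function */
        void *__x;              /* argument passed to it */
        struct __ptcb *__next;  /* link in the thread's cleanup chain */
};

typedef struct {
        int __control;       /* 0: never run, 1: in progress, 2: done */
        int __waiters;       /* threads blocked until the call completes */
        struct __ptcb __cb;  /* cleanup record, living in the flag itself
                                instead of on the caller's stack */
} once_flag;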

(Strictly speaking, once_flag is not required to be statically
allocated, unlike pthread_once_t, for which this is an explicit
requirement. Yet another omission in C11; it is on my list of things
to propose for addition.)

> and I
> don't think it's the intent of WG14 to require unwinding. The
> fundamental problem is the lack of any way to synchronize "all
> subsequent calls to the call_once function with the same value of
> flag" when the code to update flag to indicate completion of the call
> is skipped.

sure

> Do you see any solution to this problem? Perhaps a conforming
> implementation can just deadlock all subsequent calls, since (1)
> calling the function more than once conflicts with the "exactly once"
> requirement, and (2) the completion of the first call, which
> subsequent calls have to synchronize with, never happens.

one part of the solution is the "static" implementation mentioned
above, followed by a fight to get the requirement of static
allocation, and a prohibition of longjmp and thrd_exit, into the
specification of call_once.

> - In the case of longjmp, it's already dangerous to longjmp across
>   arbitrary code. Unless you know what code you're jumping over is
>   doing or have a contract that it's valid to jump over it, you can't
>   use longjmp.

sure, but just consider a nice utility library that implements
try/catch-style constructs with setjmp/longjmp.
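
Something along these lines, in pure ISO C (a minimal sketch; all the
names are made up):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf try_env;

#define TRY     if (!setjmp(try_env))
#define CATCH   else
#define THROW() longjmp(try_env, 1)

static void may_fail(void)
{
        THROW();  /* no unwinding, just jumps over all frames in between */
}

int main(void)
{
        TRY {
                may_fail();
                puts("not reached");
        } CATCH {
                puts("caught");
        }
        return 0;
}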

> > For the case of pthread_once or call_once, we could get away with it
> > by having the `struct __ptcb` static at that place. The caller of the
> > callback holds a lock at the point of call, so there would be no race
> > on such a static object.
> > 
> > I will prepare a patch for call_once to do so, just to be on the safe
> > side. This would incur 0 cost.
> 
> The cost is encoding knowledge of the cancellation implementation at
> the point of the call, and synchronizing all calls to call_once with
> respect to each other, even when they don't use the same once_flag
> object.

No, I don't think so, and with what I explained above this is perhaps
a bit clearer. The only thing this does is to make all the state of a
particular call to call_once statically allocated.

(There is even less interaction between calls to call_once with
different objects than we have in the current implementation of
pthread_once, which uses a `static int waiters` that is shared
between all calls.)

> call_once does not have specified behavior if the callback is
> cancelled (because, as far as C11 is concerned, cancellation does not
> even exist). Thus there is no need for a cleanup function at all.

After digging into it, I don't agree. In a mixed pthread/C11-thread
environment cancellation will eventually happen, so we need a cleanup
function to avoid deadlocks.
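
A hypothetical scenario that shows the deadlock (not musl code;
call_once and ONCE_FLAG_INIT as in C11, cancellation from POSIX):

#include <pthread.h>
#include <threads.h>
#include <unistd.h>

static once_flag flag = ONCE_FLAG_INIT;

static void init(void)
{
        sleep(10);  /* a cancellation point while the flag is "in progress" */
}

static void *runner(void *arg)
{
        (void)arg;
        call_once(&flag, init);  /* the thread gets cancelled in here */
        return 0;
}

int main(void)
{
        pthread_t t;
        pthread_create(&t, 0, runner, 0);
        sleep(1);
        pthread_cancel(t);
        pthread_join(t, 0);
        /* Without a cleanup handler that resets the flag, the first
         * call never "completes", so this blocks forever. */
        call_once(&flag, init);
        return 0;
}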

> I think we should wait to do anything unneccessary, ugly, and/or
> complex here unless the next issue of POSIX specifies a behavior for
> this case.

too late, done already, and it is not too ugly, either

> > The same could be used for pthread_once.
> 
> Above analysis about longjmp being impossible to support without
> deadlock or unwinding in longjmp also applies to pthread_once. I
> suspect, if longjmp is implicitly allowed, this is just an oversight
> and one which will be corrected.

right

Also, introducing the same idea as above for pthread_once would be an
ABI change, so let's not touch it.

> We're talking about a return statement in between the
> pthread_cleanup_push and pthread_cleanup_pop calls in a particular
> block context. Third-party library code does not magically insert such
> a statement in your code. So I don't understand the case you have in
> mind.

No, it is not only return; it is also longjmp. As mentioned above,
consider a pure C11 context where I use a toolbox that implements
try/catch-style constructs by means of setjmp/longjmp.

Then I use another, independent library that uses callbacks; that
could be graphics, OpenMP, libc, whatever. And perhaps on some POSIX
system it uses pthread_cleanup_push under the hood.

It will be hard to ensure that these two tools, when used together on
an arbitrary platform that implements them both, do not collide with
the requirement not to longjmp across a pthread_cleanup context.
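
To make the collision concrete (all names made up; the longjmp here
has undefined behavior, which is precisely the problem):

#include <pthread.h>
#include <setjmp.h>

static jmp_buf catch_env;  /* owned by the try/catch toolbox */

static void user_callback(void *arg)
{
        (void)arg;
        longjmp(catch_env, 1);  /* "throw": skips the cleanup_pop below */
}

static void unlock_cb(void *m)
{
        pthread_mutex_unlock(m);
}

/* The independent, POSIX-backed library: it brackets the user
 * callback with a cancellation cleanup context. */
static void library_entry(pthread_mutex_t *m)
{
        pthread_mutex_lock(m);
        pthread_cleanup_push(unlock_cb, m);
        user_callback(0);        /* longjmps away, never returns here */
        pthread_cleanup_pop(1);  /* never reached: the context is left
                                    via longjmp, which POSIX forbids */
}

int main(void)
{
        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        if (!setjmp(catch_env))
                library_entry(&m);
        /* If we even get here, m is still locked and the cleanup
         * context was abandoned. */
        return 0;
}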

(BTW, in the musl timer code I just spotted a nasty use of longjmp
from within a cleanup handler. That one is ugly :)

Jens

-- 
:: INRIA Nancy Grand Est ::: AlGorille ::: ICube/ICPS :::
:: ::::::::::::::: office Strasbourg : +33 368854536   ::
:: :::::::::::::::::::::: gsm France : +33 651400183   ::
:: ::::::::::::::: gsm international : +49 15737185122 ::
:: http://icube-icps.unistra.fr/index.php/Jens_Gustedt ::




