mailing list of musl libc
From: Rich Felker <dalias@libc.org>
To: Ingo Molnar <mingo@kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andy Lutomirski <luto@kernel.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Borislav Petkov <bp@alien8.de>,
	"musl@lists.openwall.com" <musl@lists.openwall.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: [musl] Re: [RFC PATCH] x86/vdso/32: Add AT_SYSINFO cancellation helpers
Date: Thu, 10 Mar 2016 20:55:00 -0500
Message-ID: <20160311015500.GT9349@brightrain.aerifal.cx>
In-Reply-To: <20160311013946.GB29662@port70.net>

On Fri, Mar 11, 2016 at 02:39:47AM +0100, Szabolcs Nagy wrote:
> * Rich Felker <dalias@libc.org> [2016-03-10 19:48:59 -0500]:
> > On Fri, Mar 11, 2016 at 01:18:54AM +0100, Szabolcs Nagy wrote:
> > > * Rich Felker <dalias@libc.org> [2016-03-10 18:28:20 -0500]:
> > > > On Thu, Mar 10, 2016 at 07:03:31PM +0100, Ingo Molnar wrote:
> > > > > 
> > > > > The sticky signal is only ever sent when the thread is in cancellable state - and 
> > > > > if the target thread notices the cancellation request before the signal arrives, 
>         ^^^^^^...
> > > > > it first waits for its arrival before executing any new system calls (as part of 
>         ^^^^^^...
> > > > > the teardown, etc.).
> > > > > 
> > > > > So the C library never has to do complex work with a sticky signal pending.
> > > > > 
> > > > > Does that make more sense to you?
> > > > 
> > > > No, it doesn't work. Cancellability of the target thread at the time
> > > > of the cancellation request (when you would decide whether or not to
> > > > send the signal) has no relation to cancellability at the time of
> > > > calling the cancellation point. Consider 2 threads A and B and the
> > > > following sequence of events:
> > > > 
> > > > 1. A has cancellation enabled
> > > > 2. B calls pthread_cancel(A) and sets sticky pending signal
> > > > 3. A disables cancellation
> > > > 4. A calls cancellation point and syscall wrongly gets interrupted
> > > > 
> > > > This can be solved with more synchronization in pthread_cancel and
> > > > pthread_setcancelstate, but it seems costly. pthread_setcancelstate
> > > > would have to clear pending sticky cancellation signals, and any
> > > > internal non-cancellable syscalls would have to be made using the same
> > > > mechanism (effectively calling pthread_setcancelstate). A naive
> > > > implementation of such clearing would involve a syscall itself,
> > > 
> > > i think a syscall in setcancelstate in case of pending sticky signal
> > > is not that bad given that cancellation is very rarely used.
> > 
> > I agree, but it's not clear to me whether you could eliminate syscalls
> > in the case where it's not pending, since AS-safe lock machinery is
> > hard to get right. I don't see a way it can be done with just atomics
> > because the syscall that sends the signal cannot be atomic with the
> > memory operation setting the flag, which suggests a lock is needed, and
> > then there are all sorts of issues to deal with.
> 
> i think this is not a problem, and the above marked text hints at
> a solution: just call pause() to wait for the sticky signal if
> self->cancelstate indicates that one is coming or pending.

There are multiple problems with this approach, at least:

- pause does not 'consume' the signal; sigwaitinfo would.

- pause might return on a different signal that happens to arrive
  between setting the flag and sending the cancel signal.

- If the thread calling pthread_cancel is interrupted by a signal
  after setting the flag but before sending the signal, the target
  thread may be arbitrarily delayed; in complex cases it may even
  deadlock. This should be easy to solve though by having
  pthread_cancel run with signals masked.

> t->cancelstate always has to be modified atomically, but sending
> the sticky signal can be delayed (does not have to be atomic with
> the memory op).

Right.

> (of course there might be other caveats, and it certainly needs
> more atomic ops and more state than the current design)

I think it might be possible to do by having pthread_cancel run with
signals blocked and having sigwaitinfo consume the sticky signal if
the atomic-set cancellation-pending flag was seen, but I haven't
thought about all the corner cases of signal handlers and nested
cancellation points. POSIX may well leave the behavior of the
affected cases undefined, though. So I think solving this is
plausible, but nontrivial.

Rich



Thread overview: 35+ messages
2016-03-09  1:24 Andy Lutomirski
2016-03-09  8:56 ` Ingo Molnar
2016-03-09 11:34   ` Szabolcs Nagy
2016-03-09 11:40     ` Szabolcs Nagy
2016-03-09 19:47     ` [musl] " Linus Torvalds
2016-03-09 20:57       ` Andy Lutomirski
2016-03-09 21:26         ` Linus Torvalds
2016-03-10 10:57         ` Ingo Molnar
2016-03-10  3:34       ` [musl] " Rich Felker
2016-03-10 11:16         ` Ingo Molnar
2016-03-10 16:41           ` Rich Felker
2016-03-10 18:03             ` Ingo Molnar
2016-03-10 23:28               ` [musl] " Rich Felker
2016-03-11  0:18                 ` Szabolcs Nagy
2016-03-11  0:48                   ` [musl] " Rich Felker
2016-03-11  1:14                     ` Andy Lutomirski
2016-03-11  1:39                     ` Szabolcs Nagy
2016-03-11  1:49                       ` Szabolcs Nagy
2016-03-11  1:55                       ` Rich Felker [this message]
2016-03-11  9:33                 ` Ingo Molnar
2016-03-11 11:39                   ` Szabolcs Nagy
2016-03-11 19:27                     ` Linus Torvalds
2016-03-11 19:30                       ` [musl] " Andy Lutomirski
2016-03-11 19:39                         ` Linus Torvalds
2016-03-11 19:44                           ` Linus Torvalds
2016-03-12 17:05                             ` Ingo Molnar
2016-03-12 18:10                               ` [musl] " Rich Felker
2016-03-12 17:00                       ` Ingo Molnar
2016-03-12 18:05                         ` [musl] " Rich Felker
2016-03-12 18:48                           ` Ingo Molnar
2016-03-12 19:08                             ` [musl] " Rich Felker
2016-03-12 17:08                     ` Ingo Molnar
2016-03-09 17:58 ` Andy Lutomirski
2016-03-09 21:19   ` Andy Lutomirski
2016-03-12 18:13     ` Andy Lutomirski

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
