mailing list of musl libc
From: Markus Wichmann <nullplan@gmx.net>
To: musl@lists.openwall.com
Subject: Re: sem_wait and EINTR
Date: Sun, 9 Dec 2018 07:50:33 +0100
Message-ID: <20181209065033.GB2554@voyager>
In-Reply-To: <20181209025140.GL23599@brightrain.aerifal.cx>

On Sat, Dec 08, 2018 at 09:51:40PM -0500, Rich Felker wrote:
> Conceptually the logic looks correct, but I'm trying to phase out the
> __libc structure,

Oh? Well, I suppose it offers no benefit the linker doesn't already
give you. We could instead make the flag a static int in sigaction(),
with a hidden getter function, roughly like the sketch below.
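
Just a sketch of what I mean (my_sigaction() and __handling_sigs_get()
are placeholder names, not a real patch, and the synchronization
question discussed below is left out):

	#include <signal.h>

	/* file-scope flag, private to sigaction.c */
	static int handling_sigs;

	/* getter; in musl it would carry the "hidden" visibility
	   attribute so it never becomes part of the public ABI */
	int __handling_sigs_get(void)
	{
		return handling_sigs;
	}

	/* stand-in for the real sigaction(): only the flag update
	   is shown */
	int my_sigaction(int sig, const struct sigaction *sa,
		struct sigaction *old)
	{
		if (sa && sa->sa_handler != SIG_IGN
		       && sa->sa_handler != SIG_DFL)
			handling_sigs = 1;
		return sigaction(sig, sa, old);
	}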

> and more importantly, introducing a bitfield here is
> not safe unless access to all bitfield members is controlled by a
> common mutex or other synchronization mechanism. There is a data race
> if any of the other fields is accessed while sigaction is callable
> from another thread, and the impact of a data race on any of these is
> critical.
> 

The flags can_do_threads, threaded, and secure are all set before the
program becomes multi-threaded, and they keep their values for the rest
of the program. So once multiple threads exist, the only accesses are
reads.

handling_sigs on the other hand might be written by multiple threads at
the same time, but to the same value. Can that ever be an issue?

Wait... I think I figured it out. On some architectures you can have
incoherent caches. Threads need to synchronize access to shared
resources, and the synchronization functions take care of that
problem, but I circumvented them. Now, if the caches are set to
write-back mode (which is common), then writing to handling_sigs only
writes to the cache line for the address of handling_sigs, but not to
main memory. So another thread on another core will not see the update.

Worse: if, by a bad roll of the dice, the writing thread's cache line
is evicted before the reading thread's copy of it (the reader might
have written somewhere else in that line, making its copy dirty), then
that later eviction writes the stale line back and sets handling_sigs
back to 0.

Am I close?

> Unfortunately, even without the bitfield, there is a data race between
> access to the new flag from sem_timedwait and from sigaction. In
> theory, this potentially results in a race window where sem_wait
> retries on EINTR because it doesn't yet see the updated handling_sigs.
> 

In that case, the sem_wait() and the sigaction() would be unserialized,
and no correct program can depend on the order of operations. In
particular, the caller of the sem_wait() can't depend on the sigaction()
having installed the signal handler yet.
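
To make that concrete: a caller that wants to rely on the EINTR has to
provide the ordering itself. A hypothetical example, with the handler
installed before the waiting thread even exists:

	#include <errno.h>
	#include <pthread.h>
	#include <semaphore.h>
	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static sem_t sem;

	static void handler(int sig) { (void)sig; }

	static void *waiter(void *arg)
	{
		(void)arg;
		if (sem_wait(&sem) == -1 && errno == EINTR) {
			/* Relying on EINTR here is only valid because
			   main() called sigaction() before
			   pthread_create(); thread creation serializes
			   the two calls. */
			puts("interrupted as expected");
		}
		return 0;
	}

	int main(void)
	{
		pthread_t t;
		struct sigaction sa;

		sa.sa_handler = handler;
		sa.sa_flags = 0;            /* no SA_RESTART */
		sigemptyset(&sa.sa_mask);

		sem_init(&sem, 0, 0);
		sigaction(SIGUSR1, &sa, 0);       /* handler first... */
		pthread_create(&t, 0, waiter, 0); /* ...then the waiter */

		sleep(1);                  /* crude: let the waiter block */
		pthread_kill(t, SIGUSR1);  /* interrupt the sem_wait() */
		pthread_join(t, 0);
		return 0;
	}

Without such ordering, the waiter couldn't legitimately expect the
EINTR in the first place, which is the point above.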

> The "right" fix would be to use an AS-safe lock to protect the
> handling_sigs object, or make it atomic (a_store on the sigaction
> side, a_barrier before load on the sem side). Alternatively we could
> assume the above reasoning about sequencing of sigaction before EINTR
> and just use a lock on the sigaction side, but the atomic is probably
> simplest and "safer".
> 

Plus, a lock for a single int that can only ever go from 0 to 1 would be
a bit overkill.
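
For the record, this is how I read the atomic variant - a sketch using
C11 atomics as a stand-in for musl's a_store()/a_barrier(), with
hypothetical names, not the actual patch:

	#include <stdatomic.h>

	/* has a handler that can interrupt a wait ever been
	   installed? (name hypothetical) */
	static atomic_int eintr_valid;

	/* sigaction side; musl would use a_store(&eintr_valid, 1),
	   which is at least this strong */
	static void note_interrupting_handler(void)
	{
		atomic_store_explicit(&eintr_valid, 1,
			memory_order_release);
	}

	/* sem side; musl would issue a_barrier() and then read the
	   flag with a plain load */
	static int eintr_is_reportable(void)
	{
		return atomic_load_explicit(&eintr_valid,
			memory_order_acquire);
	}

	/* use in the wait loop would then look roughly like:
	 *
	 *	if (r == EINTR && !eintr_is_reportable())
	 *		continue;  (retry: cannot be the app's handler)
	 */

That keeps the sem side at a single fenced load on the error path,
which is about as cheap as it gets.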

> A couple other small nits: comments especially should not exceed 80
> columns (preferably <75 or so) assuming tab width 8; code should avoid
> it too but I see it slightly does in a few places there. Spaces should
> not be used for indention.

Ah crap. I forgot the comment the first time I was in the file, and when
I started vim again, I forgot to re-apply the musl settings.

> And the commit message should reflect the
> change made; it's not adding a workaround, but reducing the scope of a
> previous workaround (which should be cited by commit id) to fix a
> conformance bug. The name "handling_sigs" is also a bit misleading;
> what it's indicating is that interrupting signal handlers are or have
> been present.
> 
> I'm happy to fix up these issues and commit if there aren't mistakes
> in the above comments that still need to be addressed.
> 
> Rich

Well, it certainly was a learning experience for me. Just goes to show
that you never learn as much as when you're making a mistake and have to
figure it out.

Ciao,
Markus



Thread overview: 19+ messages
2018-12-05 19:16 Orivej Desh
2018-12-05 19:47 ` Markus Wichmann
2018-12-05 21:27   ` Ondřej Jirman
2018-12-05 21:58     ` Rich Felker
2018-12-06  2:43       ` Orivej Desh
2018-12-06  3:17         ` Rich Felker
2018-12-06 15:57           ` Markus Wichmann
2018-12-06 16:23             ` Rich Felker
2018-12-06 17:03               ` Markus Wichmann
2018-12-06 17:33                 ` Markus Wichmann
2018-12-06 20:31                   ` Orivej Desh
2018-12-09  2:51                   ` Rich Felker
2018-12-09  6:50                     ` Markus Wichmann [this message]
2018-12-12  0:32                       ` Rich Felker
2018-12-12  5:15                         ` Markus Wichmann
2018-12-14 19:45                           ` Rich Felker
2018-12-15  9:45                             ` Markus Wichmann
2018-12-05 22:03 ` Rich Felker
2018-12-06  2:43   ` Orivej Desh
