mailing list of musl libc
From: Solar Designer <solar@openwall.com>
To: musl@lists.openwall.com
Subject: Re: tough choice on thread pointer initialization issue
Date: Fri, 10 Feb 2012 14:42:52 +0400	[thread overview]
Message-ID: <20120210104252.GA14485@openwall.com> (raw)
In-Reply-To: <20120210075818.GG146@brightrain.aerifal.cx>

On Fri, Feb 10, 2012 at 02:58:18AM -0500, Rich Felker wrote:
> On Fri, Feb 10, 2012 at 11:40:02AM +0400, Solar Designer wrote:
> > approach 4: initialize the thread pointer register to zero at program
> > startup.  Then before its use, just check it for non-zero instead of
> > checking a global flag.  (I presume that you're doing the latter now,
> > based on your description of the problem.)
> 
> Well this is definitely feasible but it makes more arch-specific code,
> since examining the thread register needs asm.

But accessing it needs asm anyway, no?

> I suspect "mov %gs,%ax" might be slow on some machines, too,

It might be.  On the other hand, segment registers are read (saved) on
context switch, so this is not something CPU makers would disregard.
I wouldn't be surprised if some CPU has faster push than mov, though.

> since the segment registers
> aren't intended to be read/written often. I just don't know enough to
> know if it's a good approach; if you really think it's potentially the
> best, perhaps you could provide more info.

"Potentially" is the keyword here, but anyway I just did some testing on
a Core 2'ish CPU (E5420), and the instruction is very fast.  I put 1000
instances of:

movl %ss,%eax
testl %eax,%eax
jz 0

one after another, and this runs in just 1 cycle per the three
instructions above (so 1000 cycles for the 1000 instances total).
Of course, there are data dependencies here, so there's probably a
bypass involved and the instructions are grouped differently, like:

Cycle N:
testl %eax,%eax; jz 0; movl %ss,%eax

Cycle N+1:
testl %eax,%eax; jz 0; movl %ss,%eax

with a bypass between testl and jz.  (I am just guessing, though.)

In actual code (where you won't have repeats like this), you might want
to insert some instructions between the mov and the test (if you can).

I also tested 1000 instances of:

movl %gs,%eax

and:

movw %gs,%ax

and:

movw %gs,%ax
testw %ax,%ax

All of these also execute in 1000 cycles total.  The "w" forms carry
extra operand-size prefixes, though, so I think they are better
avoided, even though there's no slowdown from them on this CPU.

Alexander



Thread overview: 9+ messages
2012-02-10  2:58 Rich Felker
2012-02-10  7:40 ` Solar Designer
2012-02-10  7:58   ` Rich Felker
2012-02-10 10:42     ` Solar Designer [this message]
2012-02-10 11:22       ` Solar Designer
2012-02-10 17:00       ` Rich Felker
2012-02-10 19:12         ` Solar Designer
2012-02-25  6:56 ` Rich Felker
2012-02-25 13:32   ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
