From: Denys Vlasenko <vda.linux@googlemail.com>
To: musl <musl@lists.openwall.com>
Subject: Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
Date: Thu, 12 Feb 2015 21:36:26 +0100 [thread overview]
Message-ID: <CAK1hOcOvpmNVO=HPy40N2StZHDt0LNaKxw5_ZjJSZ+_47Av-eg@mail.gmail.com> (raw)
In-Reply-To: <CAK1hOcPVMiQo=fdRJSSMH6vx22aDbTcxjf_tFVphhNE8Szi6RQ@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 811 bytes --]
On Thu, Feb 12, 2015 at 8:26 PM, Denys Vlasenko
<vda.linux@googlemail.com> wrote:
>> I'd actually like to extend the "short" range up to at least 32 bytes
>> using two 8-byte writes for the middle, unless the savings from using
>> 32-bit imul instead of 64-bit are sufficient to justify 4 4-byte
>> writes for the middle. On the cpu I tested on, the difference is 11
>> cycles vs 32 cycles for non-rep path versus rep path at size 32.
>
> I have mixed feelings about the short path.
>
> On one hand, it's elegant in a contrived way.
>
> On the other hand, all those overlapping
> stores must be hammering the store unit.
> I'm wondering whether there's a faster way to do it.
For example, like the attached implementation.
This one does not perform eight stores to memory
to fill a 15-byte area... only two.
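The two-overlapping-stores idea can be sketched in C (a hypothetical
illustration, not part of the patch; `fill8to16` is an invented name,
and `memcpy` of a `uint64_t` stands in for the unaligned 8-byte store):

```c
#include <stdint.h>
#include <string.h>

/* Fill 8..16 bytes with just two 8-byte stores: one anchored at the
 * start, one anchored at the end, overlapping in the middle instead
 * of looping byte by byte. Requires 8 <= n <= 16. */
static void fill8to16(unsigned char *dst, unsigned char c, size_t n)
{
    /* broadcast the byte into all 8 lanes of a 64-bit word, same
     * trick as the imul by 0x0101010101010101 in the assembly */
    uint64_t v = (uint64_t)c * 0x0101010101010101ull;
    memcpy(dst, &v, 8);          /* bytes [0, 8)         */
    memcpy(dst + n - 8, &v, 8);  /* bytes [n-8, n), may overlap */
}
```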
[-- Attachment #2: memset.s --]
[-- Type: application/octet-stream, Size: 1136 bytes --]
.global memset
.type memset,@function
# memset(dst=%rdi, c=%esi, n=%rdx); returns dst
memset:
	movzbq %sil,%rax         # %rax = fill byte, zero-extended
	test %esi,%esi
	jnz .L_widen_rax         # unlikely: nonzero byte needs widening
.L_widened:
	mov %rdi,%r8             # save dst for the return value
	cmp $16,%rdx
	jbe .Less_than_or_equal_16
	test $7,%dil
	jnz .L_align             # unlikely: dst not 8-byte aligned
.L_aligned:
	lea -1(%rdx),%rcx
	shr $3,%rcx              # %rcx = (n-1)/8 qwords for rep stosq
	mov %rax,-8(%rdi,%rdx)   # fill the last 8 bytes up front; the
	rep                      # qword loop below covers the rest,
	stosq                    # possibly overlapping the tail store
	mov %r8,%rax
	ret

.L_widen_rax:
	# 64-bit imul has 3-7 cycles latency
	mov $0x101010101010101,%rsi
	imul %rsi,%rax           # broadcast the byte to all 8 lanes
	jmp .L_widened

# 8-byte alignment gives ~25% speedup on "rep stosq" memsets
# to L1 cache, compared to intentionally misaligned ones.
# It is a smaller win of ~15% on larger memsets to L2 too.
# Measured on Intel Sandy Bridge CPU (i7-2620M, 2.70GHz)
.L_align:
	mov %rax,(%rdi)          # fill the unaligned head in one store
1:	inc %rdi                 # then just advance to the next 8-byte
	dec %rdx                 # boundary; the bytes skipped here were
	test $7,%dil             # already covered by the store above
	jnz 1b
	jmp .L_aligned

.Less_than_or_equal_16:
	jb 1f                    # n == 16 falls through
	# fill 8-16 bytes with two (possibly overlapping) stores:
0:	mov %rax,(%rdi)
	mov %rax,-8(%rdi,%rdx)
	mov %r8,%rax
	ret
1:	test $8,%dl
	jnz 0b                   # 8 <= n < 16: same two stores
	test $4,%dl
	jz 1f
	# fill 4-7 bytes:
	mov %eax,(%rdi)
	mov %eax,-4(%rdi,%rdx)
	mov %r8,%rax
	ret
	# fill 0-3 bytes:
1:	test $2,%dl
	jz 1f
	mov %ax,(%rdi)
	add $2,%rdi
1:	test $1,%dl
	jz 1f
	mov %al,(%rdi)
1:	mov %r8,%rax
	ret
Thread overview: 7+ messages
2015-02-12 17:17 Denys Vlasenko
2015-02-12 17:17 ` [PATCH 2/2] x86_64/memset: align destination to 8 byte boundary Denys Vlasenko
2015-02-12 17:27 ` [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Rich Felker
2015-02-12 19:26 ` Denys Vlasenko
2015-02-12 20:36 ` Denys Vlasenko [this message]
2015-02-13 7:24 ` Rich Felker
2015-02-13 16:38 ` Denys Vlasenko
Code repositories for project(s) associated with this public inbox
https://git.vuxu.org/mirror/musl/