mailing list of musl libc
* [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
@ 2015-02-12 17:17 Denys Vlasenko
  2015-02-12 17:17 ` [PATCH 2/2] x86_64/memset: align destination to 8 byte boundary Denys Vlasenko
  2015-02-12 17:27 ` [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Rich Felker
  0 siblings, 2 replies; 7+ messages in thread
From: Denys Vlasenko @ 2015-02-12 17:17 UTC (permalink / raw)
  To: musl, Rich Felker; +Cc: Denys Vlasenko

memset is very, very often called with fill=0,
and 64-bit imul is expensive on many CPUs.
Avoid it if fill=0.

Also avoid the multiply on the "short memset" code path if possible,
and when we do need it, use a 32-bit one, which is cheaper on many CPUs.
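
For illustration, the byte-widening trick in C (a sketch only, not
part of the patch; the helper names are made up):

#include <stdint.h>

/* Replicate the fill byte into every byte of a word.  When the fill
 * byte is 0, the multiply can be skipped entirely: zero-extending
 * the byte already yields the full-width pattern. */
static uint64_t widen8(unsigned char c)
{
	uint64_t v = c;
	if (v) v *= 0x0101010101010101ULL; /* 64-bit imul, skipped for c == 0 */
	return v;
}

static uint32_t widen4(unsigned char c)
{
	return c * 0x01010101u; /* 32-bit imul, cheaper on many CPUs */
}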

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
---
 src/string/x86_64/memset.s | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
index 3cc8fcf..523caa0 100644
--- a/src/string/x86_64/memset.s
+++ b/src/string/x86_64/memset.s
@@ -1,13 +1,12 @@
 .global memset
 .type memset,@function
 memset:
-	movzbl %sil,%esi
-	mov $0x101010101010101,%rax
-	# 64-bit imul has 3-7 cycles latency, launch early
-	imul %rsi,%rax
-
+	movzbq %sil,%rax
 	cmp $16,%rdx
-	jb 1f
+	jb .Less_than_16
+	test %esi,%esi
+	jnz .L_widen_rax  # unlikely
+.L_widened:
 
 	lea -1(%rdx),%rcx
 	mov %rdi,%r8
@@ -18,26 +17,35 @@ memset:
 	mov %r8,%rax
 	ret
 
-1:	test %edx,%edx
-	jz 1f
+.L_widen_rax:
+	# 64-bit imul has 3-7 cycles latency
+	mov $0x101010101010101,%rsi
+	imul %rsi,%rax
+	jmp .L_widened
+
+.Less_than_16:
+	test %edx,%edx
+	jz .L_ret
 
 	mov %al,(%rdi)
 	mov %al,-1(%rdi,%rdx)
 	cmp $2,%edx
-	jbe 1f
+	jbe .L_ret
 
 	mov %al,1(%rdi)
 	mov %al,-2(%rdi,%rdx)
+	# 32-bit imul has 3-4 cycles latency
+	imul $0x1010101,%eax
 	cmp $4,%edx
-	jbe 1f
+	jbe .L_ret
 
 	mov %eax,(%rdi)
 	mov %eax,-4(%rdi,%rdx)
 	cmp $8,%edx
-	jbe 1f
+	jbe .L_ret
 
 	mov %eax,4(%rdi)
 	mov %eax,-8(%rdi,%rdx)
-
-1:	mov %rdi,%rax
+.L_ret:
+	mov %rdi,%rax
 	ret
-- 
1.8.1.4




* [PATCH 2/2] x86_64/memset: align destination to 8 byte boundary
  2015-02-12 17:17 [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Denys Vlasenko
@ 2015-02-12 17:17 ` Denys Vlasenko
  2015-02-12 17:27 ` [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Rich Felker
  1 sibling, 0 replies; 7+ messages in thread
From: Denys Vlasenko @ 2015-02-12 17:17 UTC (permalink / raw)
  To: musl, Rich Felker; +Cc: Denys Vlasenko

8-byte alignment gives ~25% speedup on "rep stosq" memsets
to L1 cache, compared to intentionally misaligned ones.
The win is smaller, ~15%, on larger memsets that go to L2.
Measured on an Intel Sandy Bridge CPU (i7-2620M, 2.70GHz).
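
The patch aligns with a simple one-byte-at-a-time loop; the number of
head bytes could equally be computed up front. A C sketch of that
alternative (head_len is a made-up name):

#include <stddef.h>
#include <stdint.h>

/* Bytes that must be filled before dest reaches an 8-byte boundary. */
static size_t head_len(const void *dest)
{
	return -(uintptr_t)dest & 7;
}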

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
---
 src/string/x86_64/memset.s | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
index 523caa0..5c9e333 100644
--- a/src/string/x86_64/memset.s
+++ b/src/string/x86_64/memset.s
@@ -4,16 +4,23 @@ memset:
 	movzbq %sil,%rax
 	cmp $16,%rdx
 	jb .Less_than_16
+
 	test %esi,%esi
 	jnz .L_widen_rax  # unlikely
 .L_widened:
 
-	lea -1(%rdx),%rcx
 	mov %rdi,%r8
+
+	test $7,%dil
+	jnz .L_align  # unlikely
+.L_aligned:
+
+	lea -1(%rdx),%rcx
 	shr $3,%rcx
 	mov %rax,-8(%rdi,%rdx)
 	rep
 	stosq
+
 	mov %r8,%rax
 	ret
 
@@ -23,6 +30,19 @@ memset:
 	imul %rsi,%rax
 	jmp .L_widened
 
+# 8-byte alignment gives ~25% speedup on "rep stosq" memsets
+# to L1 cache, compared to intentionally misaligned ones.
+# It is a smaller win of ~15% on larger memsets to L2 too.
+# Measured on Intel Sandy Bridge CPU (i7-2620M, 2.70GHz)
+.L_align:
+	mov %rax,(%rdi)
+1:	inc %rdi
+	dec %rdx
+	test $7,%dil
+	jnz 1b
+	jmp .L_aligned
+
+
 .Less_than_16:
 	test %edx,%edx
 	jz .L_ret
-- 
1.8.1.4




* Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
  2015-02-12 17:17 [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Denys Vlasenko
  2015-02-12 17:17 ` [PATCH 2/2] x86_64/memset: align destination to 8 byte boundary Denys Vlasenko
@ 2015-02-12 17:27 ` Rich Felker
  2015-02-12 19:26   ` Denys Vlasenko
  1 sibling, 1 reply; 7+ messages in thread
From: Rich Felker @ 2015-02-12 17:27 UTC (permalink / raw)
  To: musl

On Thu, Feb 12, 2015 at 06:17:02PM +0100, Denys Vlasenko wrote:
> memset is very, very often called with fill=0,
> and 64-bit imul is expensive on many CPUs.
> Avoid it if fill=0.
> 
> Also avoid the multiply on the "short memset" code path if possible,
> and when we do need it, use a 32-bit one, which is cheaper on many CPUs.

I'd actually like to extend the "short" range up to at least 32 bytes
using two 8-byte writes for the middle, unless the savings from using
a 32-bit imul instead of a 64-bit one are sufficient to justify four
4-byte writes for the middle. On the CPU I tested, the difference at
size 32 is 11 cycles for the non-rep path versus 32 for the rep path.
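
Something along these lines, sketched in C (illustrative only, not
actual musl code; the helper name is made up):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Fill n bytes, 16 <= n <= 32, with the 8-byte pattern v (the fill
 * byte replicated into all 8 bytes). The four stores may overlap. */
static void set16to32(unsigned char *s, size_t n, uint64_t v)
{
	memcpy(s, &v, 8);
	memcpy(s + 8, &v, 8);      /* two 8-byte writes cover the middle */
	memcpy(s + n - 16, &v, 8);
	memcpy(s + n - 8, &v, 8);
}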

> Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
> ---
>  src/string/x86_64/memset.s | 34 +++++++++++++++++++++-------------
>  1 file changed, 21 insertions(+), 13 deletions(-)
> 
> diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
> index 3cc8fcf..523caa0 100644
> --- a/src/string/x86_64/memset.s
> +++ b/src/string/x86_64/memset.s
> @@ -1,13 +1,12 @@
>  .global memset
>  .type memset,@function
>  memset:
> -	movzbl %sil,%esi
> -	mov $0x101010101010101,%rax
> -	# 64-bit imul has 3-7 cycles latency, launch early
> -	imul %rsi,%rax
> -
> +	movzbq %sil,%rax
>  	cmp $16,%rdx
> -	jb 1f
> +	jb .Less_than_16
> +	test %esi,%esi
> +	jnz .L_widen_rax  # unlikely
> +.L_widened:

Generally I prefer unnamed labels, but I think you've reached the
level where the code is getting confusing without names, so I'm fine
with having them here.

Once you're done with improvements to the 64-bit version, do you plan
to adapt the same principles to the 32-bit version? Or should I or
someone else work on that later?

Rich



* Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
  2015-02-12 17:27 ` [PATCH 1/2] x86_64/memset: avoid multiply insn if possible Rich Felker
@ 2015-02-12 19:26   ` Denys Vlasenko
  2015-02-12 20:36     ` Denys Vlasenko
  0 siblings, 1 reply; 7+ messages in thread
From: Denys Vlasenko @ 2015-02-12 19:26 UTC (permalink / raw)
  To: musl

On Thu, Feb 12, 2015 at 6:27 PM, Rich Felker <dalias@libc.org> wrote:
> On Thu, Feb 12, 2015 at 06:17:02PM +0100, Denys Vlasenko wrote:
>> memset is very, very often called with fill=0,
>> and 64-bit imul is expensive on many CPUs.
>> Avoid it if fill=0.
>>
>> Also avoid the multiply on the "short memset" code path if possible,
>> and when we do need it, use a 32-bit one, which is cheaper on many CPUs.
>
> I'd actually like to extend the "short" range up to at least 32 bytes
> using two 8-byte writes for the middle, unless the savings from using
> a 32-bit imul instead of a 64-bit one are sufficient to justify four
> 4-byte writes for the middle. On the CPU I tested, the difference at
> size 32 is 11 cycles for the non-rep path versus 32 for the rep path.

The short path gives me mixed feelings.

On one hand, it's elegant in a contrived way.

On the other hand, the multiple overlapping stores must be causing
hell in the store unit. I'm wondering whether there's a faster way
to do it.

Can you post your memset test suite?



* Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
  2015-02-12 19:26   ` Denys Vlasenko
@ 2015-02-12 20:36     ` Denys Vlasenko
  2015-02-13  7:24       ` Rich Felker
  0 siblings, 1 reply; 7+ messages in thread
From: Denys Vlasenko @ 2015-02-12 20:36 UTC (permalink / raw)
  To: musl

[-- Attachment #1: Type: text/plain, Size: 811 bytes --]

On Thu, Feb 12, 2015 at 8:26 PM, Denys Vlasenko
<vda.linux@googlemail.com> wrote:
>> I'd actually like to extend the "short" range up to at least 32 bytes
>> using two 8-byte writes for the middle, unless the savings from using
>> a 32-bit imul instead of a 64-bit one are sufficient to justify four
>> 4-byte writes for the middle. On the CPU I tested, the difference at
>> size 32 is 11 cycles for the non-rep path versus 32 for the rep path.
>
> The short path gives me mixed feelings.
> 
> On one hand, it's elegant in a contrived way.
> 
> On the other hand, the multiple overlapping stores must be causing
> hell in the store unit. I'm wondering whether there's a faster way
> to do it.

For example, like in the attached implementation.

This one will not perform eight stores to memory
to fill a 15-byte area... only two.

[-- Attachment #2: memset.s --]
[-- Type: application/octet-stream, Size: 1136 bytes --]

.global memset
.type memset,@function
memset:
	movzbq %sil,%rax
	test %esi,%esi
	jnz .L_widen_rax  # unlikely
.L_widened:

	mov %rdi,%r8

	cmp $16,%rdx
	jbe .Less_than_or_equal_16

	test $7,%dil
	jnz .L_align  # unlikely
.L_aligned:

	lea -1(%rdx),%rcx
	shr $3,%rcx
	mov %rax,-8(%rdi,%rdx)
	rep
	stosq

	mov %r8,%rax
	ret

.L_widen_rax:
	# 64-bit imul has 3-7 cycles latency
	mov $0x101010101010101,%rsi
	imul %rsi,%rax
	jmp .L_widened

# 8-byte alignment gives ~25% speedup on "rep stosq" memsets
# to L1 cache, compared to intentionally misaligned ones.
# It is a smaller win of ~15% on larger memsets to L2 too.
# Measured on Intel Sandy Bridge CPU (i7-2620M, 2.70GHz)
.L_align:
	mov %rax,(%rdi)
1:	inc %rdi
	dec %rdx
	test $7,%dil
	jnz 1b
	jmp .L_aligned


.Less_than_or_equal_16:
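	# flags are still set by the "cmp $16,%rdx" above:
	# fall through when the length is exactly 16, branch when below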
	jb 1f
    # fill 8-16 bytes:
0:	mov %rax,(%rdi)
	mov %rax,-8(%rdi,%rdx)
	mov %r8,%rax
	ret
1:	test $8,%dl
	jnz 0b

	test $4,%dl
	jz 1f
    # fill 4-7 bytes:
	mov %eax,(%rdi)
	mov %eax,-4(%rdi,%rdx)
	mov %r8,%rax
	ret

    # fill 0-3 bytes:
1:	test $2,%dl
	jz 1f
	mov %ax,(%rdi)
	add $2,%rdi
1:	test $1,%dl
	jz 1f
	mov %al,(%rdi)
1:	mov %r8,%rax
	ret


* Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
  2015-02-12 20:36     ` Denys Vlasenko
@ 2015-02-13  7:24       ` Rich Felker
  2015-02-13 16:38         ` Denys Vlasenko
  0 siblings, 1 reply; 7+ messages in thread
From: Rich Felker @ 2015-02-13  7:24 UTC (permalink / raw)
  To: musl

On Thu, Feb 12, 2015 at 09:36:26PM +0100, Denys Vlasenko wrote:
> On Thu, Feb 12, 2015 at 8:26 PM, Denys Vlasenko
> <vda.linux@googlemail.com> wrote:
> >> I'd actually like to extend the "short" range up to at least 32 bytes
> >> using two 8-byte writes for the middle, unless the savings from using
> >> a 32-bit imul instead of a 64-bit one are sufficient to justify four
> >> 4-byte writes for the middle. On the CPU I tested, the difference at
> >> size 32 is 11 cycles for the non-rep path versus 32 for the rep path.
> >
> > The short path gives me mixed feelings.
> >
> > On one hand, it's elegant in a contrived way.
> >
> > On the other hand, the multiple overlapping stores must be causing
> > hell in the store unit. I'm wondering whether there's a faster way
> > to do it.

In practice it performs quite well. x86s are good at this. The
generic C code in memset.c does not do any overlapping writes of
different sizes on the short-buffer code path -- all writes there are
single-byte, and a byte only gets written more than once for some of
the inner bytes, depending on the value of n.
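
The scheme is roughly the following in C (paraphrased from memory,
not the verbatim musl source):

#include <stddef.h>

/* Fill head and tail with single-byte stores, working inward from
 * both ends; small n returns early, larger n continues with aligned
 * word-sized stores for the middle. */
static void short_set(unsigned char *s, unsigned char c, size_t n)
{
	if (!n) return;
	s[0] = s[n-1] = c;
	if (n <= 2) return;
	s[1] = s[n-2] = c;
	s[2] = s[n-3] = c;
	if (n <= 6) return;
	s[3] = s[n-4] = c;
	/* ...the real code goes on to aligned word stores */
}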

> For example, like in the attached implementation.
> 
> This one will not perform eight stores to memory
> to fill a 15-byte area... only two.

I could try comparing its performance, but I expect the branches to
cost a lot more than redundant stores to cached memory. My approach in
the C code seems to use the absolute minimum possible number of
branches for short memsets, and it pays off -- it's even faster than
the current asm at these small sizes.

Rich



* Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
  2015-02-13  7:24       ` Rich Felker
@ 2015-02-13 16:38         ` Denys Vlasenko
  0 siblings, 0 replies; 7+ messages in thread
From: Denys Vlasenko @ 2015-02-13 16:38 UTC (permalink / raw)
  To: musl

On Fri, Feb 13, 2015 at 8:24 AM, Rich Felker <dalias@libc.org> wrote:
> On Thu, Feb 12, 2015 at 09:36:26PM +0100, Denys Vlasenko wrote:
> In practice it performs quite well. x86's are good at this. The
> generic C code in memset.c does not do any overlapping writes of
> different sizes for the short buffer code path -- all writes there are
> single-byte, and multiple-write only happens for some of the inner
> bytes depending on the value of n.

Measurements agree: I was not able to make it faster by doing it
differently. I did find it possible to extend your trick to cover
blocks of up to 30 bytes.
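
The idea, sketched in C (illustrative only -- the actual patch is asm
and the details may differ): fill 1, 2, 4 and then 8 bytes from each
end with overlapping stores, so up to 2*(1+2+4+8) = 30 bytes are
covered by at most eight stores and four branches.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void set_up_to_30(unsigned char *s, size_t n, unsigned char c)
{
	uint32_t v4 = c * 0x01010101u;
	uint64_t v8 = c * 0x0101010101010101ULL;

	if (!n) return;
	s[0] = s[n-1] = c;          /* covers n <= 2 */
	if (n <= 2) return;
	memcpy(s + 1, &v4, 2);      /* 2-byte stores, covers n <= 6 */
	memcpy(s + n - 3, &v4, 2);
	if (n <= 6) return;
	memcpy(s + 3, &v4, 4);      /* 4-byte stores, covers n <= 14 */
	memcpy(s + n - 7, &v4, 4);
	if (n <= 14) return;
	memcpy(s + 7, &v8, 8);      /* 8-byte stores, covers n <= 30 */
	memcpy(s + n - 15, &v8, 8);
}
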
Sending the patch...

