musl libc mailing list archive
* [PATCH 1/2] x86_64/memset: simple optimizations
From: Denys Vlasenko @ 2015-02-10 17:30 UTC
  To: musl; +Cc: Denys Vlasenko

"and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.

64-bit imul is slow, so move it as far up as possible: its result
(rax, the fill byte broadcast into all eight byte lanes) then has more
time to be ready by the time the memory stores start using it.

There is no need to shuffle registers in preparation for "rep stos"
if we are not going to take that code path. Thus, the patch moves the
"jump if len < 16" instructions up, and changes the alternate code path
to use rdx and rdi instead of rcx and r8.
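
In C terms, the control flow after this patch is roughly as follows
(an illustrative sketch with an invented name, not the actual
implementation):

#include <stdint.h>
#include <string.h>

void *memset_sketch(void *dest, int c, size_t n)
{
	unsigned char *d = dest;
	/* the imul by 0x0101010101010101 broadcasts the fill byte
	   into all eight byte lanes of the word */
	uint64_t fill = (uint64_t)(uint8_t)c * 0x0101010101010101ULL;

	if (n >= 16) {
		/* possibly misaligned store covering the last 8 bytes,
		   then n/8 qwords from the start (the rep stosq part) */
		memcpy(d + n - 8, &fill, 8);
		for (size_t i = 0; i < (n >> 3); i++)
			memcpy(d + 8*i, &fill, 8);
		return dest;
	}
	if (n == 0)
		return dest;
	/* small case: overlapping 1- and 4-byte stores from both ends */
	d[0] = d[n-1] = (unsigned char)c;
	if (n <= 2)
		return dest;
	d[1] = d[n-2] = (unsigned char)c;
	if (n <= 4)
		return dest;
	memcpy(d, &fill, 4);
	memcpy(d + n - 4, &fill, 4);
	if (n <= 8)
		return dest;
	memcpy(d + 4, &fill, 4);
	memcpy(d + n - 8, &fill, 4);
	return dest;
}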

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
---
 src/string/x86_64/memset.s | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
index fc06eef..263336b 100644
--- a/src/string/x86_64/memset.s
+++ b/src/string/x86_64/memset.s
@@ -1,41 +1,43 @@
 .global memset
 .type memset,@function
 memset:
-	and $0xff,%esi
+	movzbl %sil,%esi
 	mov $0x101010101010101,%rax
-	mov %rdx,%rcx
-	mov %rdi,%r8
+	# 64-bit imul has 3-7 cycles latency, launch early
 	imul %rsi,%rax
-	cmp $16,%rcx
+
+	cmp $16,%rdx
 	jb 1f
 
-	mov %rax,-8(%rdi,%rcx)
+	mov %rdx,%rcx
+	mov %rdi,%r8
 	shr $3,%rcx
+	mov %rax,-8(%rdi,%rdx)
 	rep
 	stosq
 	mov %r8,%rax
 	ret
 
-1:	test %ecx,%ecx
+1:	test %edx,%edx
 	jz 1f
 
 	mov %al,(%rdi)
-	mov %al,-1(%rdi,%rcx)
-	cmp $2,%ecx
+	mov %al,-1(%rdi,%rdx)
+	cmp $2,%edx
 	jbe 1f
 
 	mov %al,1(%rdi)
-	mov %al,-2(%rdi,%rcx)
-	cmp $4,%ecx
+	mov %al,-2(%rdi,%rdx)
+	cmp $4,%edx
 	jbe 1f
 
 	mov %eax,(%rdi)
-	mov %eax,-4(%rdi,%rcx)
-	cmp $8,%ecx
+	mov %eax,-4(%rdi,%rdx)
+	cmp $8,%edx
 	jbe 1f
 
 	mov %eax,4(%rdi)
-	mov %eax,-8(%rdi,%rcx)
+	mov %eax,-8(%rdi,%rdx)
 
-1:	mov %r8,%rax
+1:	mov %rdi,%rax
 	ret
-- 
1.8.1.4




* [PATCH 2/2] x86_64/memset: avoid performing final store twice
From: Denys Vlasenko @ 2015-02-10 17:30 UTC
  To: musl; +Cc: Denys Vlasenko

The code does a potentially misaligned 8-byte store to fill the tail
of the buffer, then fills the initial part of the buffer,
whose length is rounded down to a multiple of 8 bytes.
Therefore, if the size is divisible by 8, we were storing the last
word twice.

This patch decrements the byte count before dividing it by 8,
which saves one store in the "size is divisible by 8" case
and changes nothing in all other cases,
at the cost of replacing one MOV insn with an LEA insn.
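
A quick worked check of the qword counts (a hypothetical demonstration
program, not part of the patch):

#include <stdio.h>

int main(void)
{
	/* rep stosq count before (n>>3) and after ((n-1)>>3) the patch;
	   the separate tail store always covers bytes n-8 .. n-1 */
	for (unsigned n = 16; n <= 18; n++)
		printf("n=%u: old %u qwords (bytes 0..%u), "
		       "new %u qwords (bytes 0..%u), tail %u..%u\n",
		       n, n >> 3, 8*(n >> 3) - 1,
		       (n - 1) >> 3, 8*((n - 1) >> 3) - 1,
		       n - 8, n - 1);
	return 0;
}

For n = 16 the count drops from 2 qwords to 1 (the tail store already
covers bytes 8..15); for n = 17 and n = 18 nothing changes.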

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
---
 src/string/x86_64/memset.s | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
index 263336b..3cc8fcf 100644
--- a/src/string/x86_64/memset.s
+++ b/src/string/x86_64/memset.s
@@ -9,7 +9,7 @@ memset:
 	cmp $16,%rdx
 	jb 1f
 
-	mov %rdx,%rcx
+	lea -1(%rdx),%rcx
 	mov %rdi,%r8
 	shr $3,%rcx
 	mov %rax,-8(%rdi,%rdx)
-- 
1.8.1.4




* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Rich Felker @ 2015-02-10 20:50 UTC
  To: musl

On Tue, Feb 10, 2015 at 06:30:56PM +0100, Denys Vlasenko wrote:
> "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
> 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
> [...]

Do you want to go ahead with these patches as-is, or consider some of
the other ideas we discussed off-list like avoiding the 64-bit imul
entirely in the small-n case? If you think that's easy as another
incremental change I'll go ahead with these, but if it's basically
going to be backing out these first changes and replacing them with
something different, I'm not sure if it makes sense to commit one then
the other.

Rich



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Denys Vlasenko @ 2015-02-10 21:08 UTC
  To: musl

On Tue, Feb 10, 2015 at 9:50 PM, Rich Felker <dalias@libc.org> wrote:
> On Tue, Feb 10, 2015 at 06:30:56PM +0100, Denys Vlasenko wrote:
>> "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
>> 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
>> [...]
>
> Do you want to go ahead with these patches as-is, or consider some of
> the other ideas we discussed off-list like avoiding the 64-bit imul
> entirely in the small-n case? If you think that's easy as another
> incremental change I'll go ahead with these

I think you can apply these patches without waiting
for potential future improvements.



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Rich Felker @ 2015-02-10 21:37 UTC
  To: musl

On Tue, Feb 10, 2015 at 10:08:29PM +0100, Denys Vlasenko wrote:
> On Tue, Feb 10, 2015 at 9:50 PM, Rich Felker <dalias@libc.org> wrote:
> > On Tue, Feb 10, 2015 at 06:30:56PM +0100, Denys Vlasenko wrote:
> >> "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
> >> 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
> >> [...]
> >
> > Do you want to go ahead with these patches as-is, or consider some of
> > the other ideas we discussed off-list like avoiding the 64-bit imul
> > entirely in the small-n case? If you think that's easy as another
> > incremental change I'll go ahead with these
> 
> I think you can apply these patches without waiting
> for potential future improvements.

OK. Based on some casual testing on my Celeron 847:

- For small sizes, your patches make a significant improvement, 20-30%.

- For the rep stosq path, the improvement is minimal (roughly 1-2 cycles).

- Using 32-bit imul instead of 64-bit makes no difference at all.
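
A crude rdtsc loop of roughly the following shape reproduces this kind
of measurement (an illustrative sketch, not necessarily the exact
harness used; build with -fno-builtin so calls reach the libc memset):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

int main(void)
{
	static unsigned char buf[4096];
	for (size_t n = 1; n <= 1024; n *= 2) {
		uint64_t best = (uint64_t)-1;
		for (int i = 0; i < 100000; i++) {
			uint64_t t0 = __rdtsc();
			memset(buf, i & 0xff, n);
			/* keep the compiler from eliding the stores */
			__asm__ volatile("" : : "r"(buf) : "memory");
			uint64_t t1 = __rdtsc();
			if (t1 - t0 < best)
				best = t1 - t0;
		}
		printf("n=%4zu: ~%llu cycles (best case)\n",
		       n, (unsigned long long)best);
	}
	return 0;
}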

I'll review the patches again for correctness, but so far they look
good, and it doesn't look like these are things we'd want to back out
or rewrite for subsequent improvements anyway.

Thanks!

Rich



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Rich Felker @ 2015-02-10 22:36 UTC
  To: musl

On Tue, Feb 10, 2015 at 04:37:56PM -0500, Rich Felker wrote:
> On Tue, Feb 10, 2015 at 10:08:29PM +0100, Denys Vlasenko wrote:
> > On Tue, Feb 10, 2015 at 9:50 PM, Rich Felker <dalias@libc.org> wrote:
> > > On Tue, Feb 10, 2015 at 06:30:56PM +0100, Denys Vlasenko wrote:
> > >> "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
> > >> 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
> > >> [...]
> > >
> > > Do you want to go ahead with these patches as-is, or consider some of
> > > the other ideas we discussed off-list like avoiding the 64-bit imul
> > > entirely in the small-n case? If you think that's easy as another
> > > incremental change I'll go ahead with these
> > 
> > I think you can apply these patches without waiting
> > for potential future improvements.
> 
> OK. Based on some casual testing on my Celeron 847:
> 
> - For small sizes, your patches make a significant improvement, 20-30%.
>
> - For the rep stosq path, the improvement is minimal (roughly 1-2 cycles).
> 
> - Using 32-bit imul instead of 64-bit makes no difference at all.
> 
> I'll review the patches again for correctness, but so far they look
> good, and it doesn't look like these are things we'd want to back out
> or rewrite for subsequent improvements anyway.
> 
> Thanks!

One more trivial change I might do: since the non-rep-stosq path is
faster for small sizes, changing the jb 1f to jbe 1f significantly
improves 16-byte memsets with no additional code changes.
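
In C terms the change is just a boundary tweak (a paraphrase with an
invented name, not the committed code):

#include <stddef.h>

/* Path selection after the jb -> jbe change: n == 16 now takes the
   overlapping-stores path, which already covers it, instead of paying
   the rep stosq setup cost. */
static int take_rep_stosq_path(size_t n)
{
	return n > 16;	/* previously: n >= 16 */
}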

Rich



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Rich Felker @ 2015-02-10 23:20 UTC
  To: musl

On Tue, Feb 10, 2015 at 05:36:48PM -0500, Rich Felker wrote:
> On Tue, Feb 10, 2015 at 04:37:56PM -0500, Rich Felker wrote:
> > On Tue, Feb 10, 2015 at 10:08:29PM +0100, Denys Vlasenko wrote:
> > > On Tue, Feb 10, 2015 at 9:50 PM, Rich Felker <dalias@libc.org> wrote:
> > > > On Tue, Feb 10, 2015 at 06:30:56PM +0100, Denys Vlasenko wrote:
> > > >> "and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
> > > >> 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.
> > > >> [...]
> > > >
> > > > Do you want to go ahead with these patches as-is, or consider some of
> > > > the other ideas we discussed off-list like avoiding the 64-bit imul
> > > > entirely in the small-n case? If you think that's easy as another
> > > > incremental change I'll go ahead with these
> > > 
> > > I think you can apply these patches without waiting
> > > for potential future improvements.
> > 
> > OK. Based on some casual testing on my Celeron 847:
> > 
> > - For small sizes, your patches make a significant improvement, 20-30%.
> >
> > - For the rep stosq path, the improvement is minimal (roughly 1-2 cycles).
> > 
> > - Using 32-bit imul instead of 64-bit makes no difference at all.
> > 
> > I'll review the patches again for correctness, but so far they look
> > good, and it doesn't look like these are things we'd want to back out
> > or rewrite for subsequent improvements anyway.
> > 
> > Thanks!
> 
> One more trivial change I might do: since the non-rep-stosq path is
> faster for small sizes, changing the jb 1f to jbe 1f significantly
> improves 16-byte memsets with no additional code changes.

A few more observations:

- I think we should swap the order of the small/rep-stos code paths so
  that the small case doesn't branch. In practice this seems to save a
  few cycles and it makes sense that it should.

- The small path strategy seems to be optimal for sizes much larger
  than 16. We could easily extend it up to 32 with only one more
  branch step (see the sketch after this list), but should probably
  extend it up to 64 or so.

- At very small sizes, memset.c beats the asm. It seems to be a result
  of deferring the movzbl/imul until larger-than-byte stores are
  needed. But once the size gets large, performance is asymptotically
  half the speed of the asm (rep stosq).
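
For instance, the 16-to-32 rung could look like this in C terms (a
sketch with an invented helper name, reusing the broadcast fill word;
untested):

#include <stdint.h>
#include <string.h>

/* two pairs of overlapping 8-byte stores cover any 16 <= n <= 32 */
static void set_16_to_32(unsigned char *d, uint64_t fill, size_t n)
{
	memcpy(d, &fill, 8);		/* bytes 0..7      */
	memcpy(d + 8, &fill, 8);	/* bytes 8..15     */
	memcpy(d + n - 16, &fill, 8);	/* bytes n-16..n-9 */
	memcpy(d + n - 8, &fill, 8);	/* bytes n-8..n-1  */
}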

These can all of course be done as separate patches later.

Also, similar changes should be considered for the i386 asm, and .sub
files should be added for x32 to use the x86_64 asm.

Rich



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Denys Vlasenko @ 2015-02-11  1:07 UTC
  To: musl

On Tue, Feb 10, 2015 at 10:37 PM, Rich Felker <dalias@libc.org> wrote:
> OK. Based on some casual testing on my Celeron 847:
>
> - For small sizes, your patches make a significant improvement, 20-30%.
>
> - For the rep stosq path, the improvement is minimal (roughly 1-2 cycles).
>
> - Using 32-bit imul instead of 64-bit makes no difference at all.

That's because the Celeron 847 is a Sandy Bridge CPU. Only Intel's "big"
CPUs starting from Nehalem have a fast (and large in transistor count)
integer multiplier capable of a 3-cycle 64-bit multiply.

Many other CPUs are worse, even Intel ones: Atom takes 13 cycles (!),
Silvermont 5 cycles, AMD's Bulldozers 6 cycles, Bobcat 6-7, Jaguar 6,
and K10 4 cycles.

32-bit imul is 3 or 4 cycles on all these CPUs (well, Atom takes 5).
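
On such CPUs the 64-bit multiply could be sidestepped entirely; one
option, sketched in C with an invented helper name (not a tested
patch), is to broadcast with a 32-bit imul and widen with shift+or:

#include <stdint.h>

static uint64_t broadcast_byte(int c)
{
	uint32_t f = (uint32_t)(uint8_t)c * 0x01010101u;  /* 32-bit imul */
	return (uint64_t)f << 32 | f;                     /* shl + or */
}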



* Re: [PATCH 1/2] x86_64/memset: simple optimizations
From: Rich Felker @ 2015-02-11  1:21 UTC
  To: Denys Vlasenko; +Cc: musl

On Wed, Feb 11, 2015 at 02:07:23AM +0100, Denys Vlasenko wrote:
> On Tue, Feb 10, 2015 at 10:37 PM, Rich Felker <dalias@libc.org> wrote:
> > OK. Based on some casual testing on my Celeron 847:
> >
> > - For small sizes, your patches make a significant improvement, 20-30%.
> >
> > - For the rep stosq path, the improvement is minimal (roughly 1-2 cycles).
> >
> > - Using 32-bit imul instead of 64-bit makes no difference at all.
> 
> That's because the Celeron 847 is a Sandy Bridge CPU. Only Intel's "big"
> CPUs starting from Nehalem have a fast (and large in transistor count)
> integer multiplier capable of a 3-cycle 64-bit multiply.
>
> Many other CPUs are worse, even Intel ones: Atom takes 13 cycles (!),
> Silvermont 5 cycles, AMD's Bulldozers 6 cycles, Bobcat 6-7, Jaguar 6,
> and K10 4 cycles.
>
> 32-bit imul is 3 or 4 cycles on all these CPUs (well, Atom takes 5).

Thanks for the info, and for the patches, which I just committed.

Rich



Thread overview: 9 messages
2015-02-10 17:30 [PATCH 1/2] x86_64/memset: simple optimizations Denys Vlasenko
2015-02-10 17:30 ` [PATCH 2/2] x86_64/memset: avoid performing final store twice Denys Vlasenko
2015-02-10 20:50 ` [PATCH 1/2] x86_64/memset: simple optimizations Rich Felker
2015-02-10 21:08   ` Denys Vlasenko
2015-02-10 21:37     ` Rich Felker
2015-02-10 22:36       ` Rich Felker
2015-02-10 23:20         ` Rich Felker
2015-02-11  1:07       ` Denys Vlasenko
2015-02-11  1:21         ` Rich Felker
