mailing list of musl libc
* [PATCH 1/2] x86_64/memset: simple optimizations
@ 2015-02-10 17:30 Denys Vlasenko
  2015-02-10 17:30 ` [PATCH 2/2] x86_64/memset: avoid performing final store twice Denys Vlasenko
  2015-02-10 20:50 ` [PATCH 1/2] x86_64/memset: simple optimizations Rich Felker
  0 siblings, 2 replies; 9+ messages in thread
From: Denys Vlasenko @ 2015-02-10 17:30 UTC (permalink / raw)
  To: musl; +Cc: Denys Vlasenko

"and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use
4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead.

64-bit imul is slow (3-7 cycles of latency), so move it as far up as
possible, giving its result (rax) more time to become ready before it
is first used in the memory stores.
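
For context, the imul here is the usual byte-broadcast trick:
multiplying the fill byte by 0x0101010101010101 replicates it into
all eight bytes of rax. A minimal C sketch of that single multiply
(illustration only, not musl code; the helper name is made up):

	#include <stdint.h>

	/* Value that "imul %rsi,%rax" leaves in rax: the fill byte c
	 * replicated into every byte of a 64-bit word. */
	static uint64_t broadcast_byte(int c)
	{
		return (uint64_t)(unsigned char)c * 0x0101010101010101ULL;
	}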

There is no need to shuffle registers in preparation for "rep stos"
if that code path is not going to be taken. The patch therefore moves
the "jump if len < 16" check up and changes the alternate (short-length)
code path to use rdx and rdi instead of rcx and r8.
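
That alternate path is the classic overlapping head/tail store scheme
for short lengths: write the fill value at both ends of the buffer
with progressively wider stores. Roughly, in C (a sketch using the
broadcast value from above; names are hypothetical, this is not the
musl source):

	#include <stdint.h>
	#include <string.h>

	/* Sketch of the n < 16 path: possibly overlapping stores at the
	 * start and end of the buffer, widening as n grows. */
	static void memset_small(unsigned char *d, uint64_t v, size_t n)
	{
		uint32_t v32 = (uint32_t)v;
		if (n == 0) return;
		d[0] = d[n-1] = (unsigned char)v;   /* covers n <= 2 */
		if (n <= 2) return;
		d[1] = d[n-2] = (unsigned char)v;   /* covers n <= 4 */
		if (n <= 4) return;
		memcpy(d, &v32, 4);                 /* covers n <= 8 ... */
		memcpy(d + n - 4, &v32, 4);
		if (n <= 8) return;
		memcpy(d + 4, &v32, 4);             /* ... and 8 < n < 16 */
		memcpy(d + n - 8, &v32, 4);
	}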

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
---
 src/string/x86_64/memset.s | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
index fc06eef..263336b 100644
--- a/src/string/x86_64/memset.s
+++ b/src/string/x86_64/memset.s
@@ -1,41 +1,43 @@
 .global memset
 .type memset,@function
 memset:
-	and $0xff,%esi
+	movzbl %sil,%esi
 	mov $0x101010101010101,%rax
-	mov %rdx,%rcx
-	mov %rdi,%r8
+	# 64-bit imul has 3-7 cycles latency, launch early
 	imul %rsi,%rax
-	cmp $16,%rcx
+
+	cmp $16,%rdx
 	jb 1f
 
-	mov %rax,-8(%rdi,%rcx)
+	mov %rdx,%rcx
+	mov %rdi,%r8
 	shr $3,%rcx
+	mov %rax,-8(%rdi,%rdx)
 	rep
 	stosq
 	mov %r8,%rax
 	ret
 
-1:	test %ecx,%ecx
+1:	test %edx,%edx
 	jz 1f
 
 	mov %al,(%rdi)
-	mov %al,-1(%rdi,%rcx)
-	cmp $2,%ecx
+	mov %al,-1(%rdi,%rdx)
+	cmp $2,%edx
 	jbe 1f
 
 	mov %al,1(%rdi)
-	mov %al,-2(%rdi,%rcx)
-	cmp $4,%ecx
+	mov %al,-2(%rdi,%rdx)
+	cmp $4,%edx
 	jbe 1f
 
 	mov %eax,(%rdi)
-	mov %eax,-4(%rdi,%rcx)
-	cmp $8,%ecx
+	mov %eax,-4(%rdi,%rdx)
+	cmp $8,%edx
 	jbe 1f
 
 	mov %eax,4(%rdi)
-	mov %eax,-8(%rdi,%rcx)
+	mov %eax,-8(%rdi,%rdx)
 
-1:	mov %r8,%rax
+1:	mov %rdi,%rax
 	ret
-- 
1.8.1.4
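
For lengths of 16 and above, the routine stores one 8-byte word at the
very end of the buffer and then fills n/8 words from the start with
rep stosq; the two overlap enough to cover every byte. A rough C
rendering of that coverage argument (a sketch only, assuming unaligned
8-byte stores as on x86_64; not the musl source):

	#include <stdint.h>
	#include <string.h>

	/* Sketch of the n >= 16 path: one trailing word store plus
	 * floor(n/8) word stores from the start cover all n bytes. */
	static void memset_large(unsigned char *d, uint64_t v, size_t n)
	{
		memcpy(d + n - 8, &v, 8);            /* mov %rax,-8(%rdi,%rdx) */
		for (size_t i = 0; i < n / 8; i++)   /* rep stosq, rcx = n >> 3 */
			memcpy(d + 8 * i, &v, 8);
	}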




end of thread (newest: 2015-02-11  1:21 UTC)

Thread overview: 9+ messages
2015-02-10 17:30 [PATCH 1/2] x86_64/memset: simple optimizations Denys Vlasenko
2015-02-10 17:30 ` [PATCH 2/2] x86_64/memset: avoid performing final store twice Denys Vlasenko
2015-02-10 20:50 ` [PATCH 1/2] x86_64/memset: simple optimizations Rich Felker
2015-02-10 21:08   ` Denys Vlasenko
2015-02-10 21:37     ` Rich Felker
2015-02-10 22:36       ` Rich Felker
2015-02-10 23:20         ` Rich Felker
2015-02-11  1:07       ` Denys Vlasenko
2015-02-11  1:21         ` Rich Felker

Code repositories for project(s) associated with this public inbox

	https://git.vuxu.org/mirror/musl/
