From: Rich Felker
Date: Sat, 14 Feb 2015 23:06:55 -0500
To: musl@lists.openwall.com
Subject: Re: [PATCH] x86_64/memset: use "small block" code for blocks up to 30 bytes long
Message-ID: <20150215040655.GM23507@brightrain.aerifal.cx>
In-Reply-To: <20150214193533.GK23507@brightrain.aerifal.cx>
References: <1423845589-5920-1-git-send-email-vda.linux@googlemail.com>
 <20150214193533.GK23507@brightrain.aerifal.cx>

On Sat, Feb 14, 2015 at 02:35:33PM -0500, Rich Felker wrote:
> On Fri, Feb 13, 2015 at 05:39:49PM +0100, Denys Vlasenko wrote:
> > Before this change, we were using it only for 15-byte blocks and smaller.
> > Measurements on Sandy Bridge CPU show that "rep stosq" setup time
> > is high enough to dominate speed of fills well above that size:
>
> I just ran some tests with the latest three patches, including this
> one, and aside from the significant improvement for sizes 16-30 in
> this last patch, each patch makes various sizes 2-15 mildly slower. A
> few of them go from 9 cycles to 11 cycles average; most just increase
> by 1 cycle. And all sizes 8 and under are still slower than the C
> code.
>
> I don't think these small sizes are a typical deliberate usage case
> for memset, but even just a couple cycles is a large relative
> difference at such sizes, and they could arise in generic code (think
> something like qsort which uses memcpy possibly with small sizes, but
> something using memset instead) and make a significant performance
> difference.
>
> The main change whose value I really question is the conditional
> widen_rax. If the value isn't used until a few cycles after the imul
> instruction, doing it unconditionally is probably cheaper than testing
> and branching even when the branch is predictable.

To elaborate: simply replacing the unconditional imul with an
unconditional xor %eax,%eax in my best variant so far saved only one
cycle. So I don't see any way a test, branch, and conditional imul
could be less expensive than the unconditional imul.
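For reference, the imul in question is nothing more than a byte
broadcast: it replicates the fill byte into every byte lane of a 64-bit
register. A minimal C illustration of the same computation (just an
illustration, not code from either patch):

    #include <stdint.h>

    /* Replicate the low byte into all eight bytes of a 64-bit word,
       e.g. 0xab -> 0xabababababababab -- the value memset needs for
       its word-sized stores. */
    static uint64_t broadcast_byte(unsigned char c)
    {
        return (uint64_t)c * 0x0101010101010101ULL;
    }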
I'm attaching my current draft based on the ideas so far in this
thread. See how it compares to your version(s) in the timing tests
you're using, on your hardware.

Rich

---------- attachment: my2.s (text/plain) ----------

.global memset
.type memset,@function
memset:
	# broadcast the fill byte into all 8 bytes of rax
	movzbq %sil,%rax
	mov $0x101010101010101,%r8
	imul %r8,%rax

	# n-1 >= 126 (unsigned) catches both n == 0 and n > 126;
	# send those to the rep stosq path
	lea -1(%rdx),%rcx
	cmp $126,%rcx
	jae 2f

	# small sizes: overlapping stores from both ends, roughly
	# doubling the amount written each round
	mov %sil,(%rdi)
	mov %sil,-1(%rdi,%rdx)
	cmp $2,%edx
	jbe 1f

	mov %ax,1(%rdi)
	mov %ax,(-1-2)(%rdi,%rdx)
	cmp $6,%edx
	jbe 1f

	mov %eax,(1+2)(%rdi)
	mov %eax,(-1-2-4)(%rdi,%rdx)
	cmp $14,%edx
	jbe 1f

	mov %rax,(1+2+4)(%rdi)
	mov %rax,(-1-2-4-8)(%rdi,%rdx)
	cmp $30,%edx
	jbe 1f

	mov %rax,(1+2+4+8)(%rdi)
	mov %rax,(1+2+4+8+8)(%rdi)
	mov %rax,(-1-2-4-8-16)(%rdi,%rdx)
	mov %rax,(-1-2-4-8-8)(%rdi,%rdx)
	cmp $62,%edx
	jbe 1f

	mov %rax,(1+2+4+8+16)(%rdi)
	mov %rax,(1+2+4+8+16+8)(%rdi)
	mov %rax,(1+2+4+8+16+16)(%rdi)
	mov %rax,(1+2+4+8+16+24)(%rdi)
	mov %rax,(-1-2-4-8-16-32)(%rdi,%rdx)
	mov %rax,(-1-2-4-8-16-24)(%rdi,%rdx)
	mov %rax,(-1-2-4-8-16-16)(%rdi,%rdx)
	mov %rax,(-1-2-4-8-16-8)(%rdi,%rdx)

	# return dest
1:	mov %rdi,%rax
	ret

	# large sizes: write the unaligned head and tail, then fill
	# the aligned middle with rep stosq
2:	test %rdx,%rdx
	jz 1b

	mov %rdi,%r8
	mov %rax,(%rdi)
	mov %rax,-16(%rdi,%rdx)
	mov %rax,-8(%rdi,%rdx)
	add $8,%rdi
	sub $8,%rcx
	and $-8,%rdi
	shr $3,%rcx
	rep stosq
	mov %r8,%rax
	ret
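For anyone who wants to reproduce the per-size numbers: the harness I
used isn't included here, but a rough sketch of this kind of
measurement follows. The rdtsc averaging, the 256-byte buffer, and the
1000000-call loop are arbitrary choices for illustration, not anything
taken from the patches.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    /* Read the time-stamp counter.  Good enough for rough per-call
       averages; it counts TSC ticks, not necessarily core cycles, and
       is not serialized. */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return (uint64_t)hi << 32 | lo;
    }

    int main(void)
    {
        static unsigned char buf[256];
        /* Call through a volatile function pointer so the compiler
           can't inline memset or drop the dead stores. */
        void *(*volatile do_memset)(void *, int, size_t) = memset;
        enum { ITERS = 1000000 };

        for (size_t n = 1; n <= 128; n++) {
            uint64_t t0 = rdtsc();
            for (int i = 0; i < ITERS; i++)
                do_memset(buf, i & 0xff, n);
            uint64_t t1 = rdtsc();
            printf("%3zu bytes: %6.1f cycles/call\n",
                   n, (double)(t1 - t0) / ITERS);
        }
        return 0;
    }

To compare two candidate implementations, build the program once
against each (or call them under different names) and diff the output.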