From: Rich Felker
To: musl@lists.openwall.com
Subject: Re: [PATCH 1/2] x86_64/memset: avoid multiply insn if possible
Date: Thu, 12 Feb 2015 12:27:35 -0500
Message-ID: <20150212172735.GX23507@brightrain.aerifal.cx>
In-Reply-To: <1423761423-30050-1-git-send-email-vda.linux@googlemail.com>

On Thu, Feb 12, 2015 at 06:17:02PM +0100, Denys Vlasenko wrote:
> memset is very, very often called with fill=0,
> and 64-bit imul is expensive on many CPUs.
> Avoid it if fill=0.
>
> Also avoid the multiply on the "short memset" code path if possible,
> and when we do need it, use the 32-bit one, which is cheaper on many
> CPUs.

I'd actually like to extend the "short" range up to at least 32
bytes, using two 8-byte writes for the middle, unless the savings
from using 32-bit imul instead of 64-bit are sufficient to justify
four 4-byte writes for the middle. On the CPU I tested on, the
non-rep path takes 11 cycles at size 32 versus 32 cycles for the
rep path.

> Signed-off-by: Denys Vlasenko
> ---
>  src/string/x86_64/memset.s | 34 +++++++++++++++++++++-------------
>  1 file changed, 21 insertions(+), 13 deletions(-)
>
> diff --git a/src/string/x86_64/memset.s b/src/string/x86_64/memset.s
> index 3cc8fcf..523caa0 100644
> --- a/src/string/x86_64/memset.s
> +++ b/src/string/x86_64/memset.s
> @@ -1,13 +1,12 @@
>  .global memset
>  .type memset,@function
>  memset:
> -	movzbl %sil,%esi
> -	mov $0x101010101010101,%rax
> -	# 64-bit imul has 3-7 cycles latency, launch early
> -	imul %rsi,%rax
> -
> +	movzbq %sil,%rax
>  	cmp $16,%rdx
> -	jb 1f
> +	jb .Less_than_16
> +	test %esi,%esi
> +	jnz .L_widen_rax # unlikely
> +.L_widened:

Generally I prefer unnamed labels, but I think you've reached the
level where the code is getting confusing without names, so I'm fine
with having them here.

Once you're done with improvements to the 64-bit version, do you plan
to adapt the same principles to the 32-bit version? Or should I or
someone else work on that later?

Rich
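
P.S. For anyone following the thread without the asm fresh in mind,
here is a rough C sketch of the widening trick under discussion. It
is purely illustrative (the helper name is mine, not the patch's; the
real code is the assembly quoted above): multiplying the fill byte by
0x0101010101010101 replicates it into all eight byte lanes, and when
the fill byte is zero the product is zero anyway, so the expensive
64-bit imul can be skipped in the common case.

#include <assert.h>
#include <stdint.h>

/* Broadcast a fill byte into all 8 bytes of a 64-bit word.
 * Mirrors the patched prologue: movzbq %sil,%rax, then imul
 * only when the byte is nonzero. */
static uint64_t widen_fill(unsigned char c)
{
	uint64_t v = c;
	if (v)  /* fill=0 is by far the common case */
		v *= 0x0101010101010101ull;
	return v;
}

int main(void)
{
	assert(widen_fill(0) == 0);
	assert(widen_fill(0xab) == 0xababababababababull);
	return 0;
}

The multiply is carry-free because c < 256: each shifted partial
product lands in its own byte lane, so every byte of the result is
exactly c.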
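
P.P.S. And here is one hypothetical way the 16-32 byte range I'm
suggesting could look in C, again just a sketch under my own naming,
not anything from the patch (it assumes the 64-bit widened word from
above, with memcpy standing in for unaligned 8-byte stores): two
overlapping 16-byte windows, each written as two 8-byte stores, cover
the whole buffer.

#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fill d[0..n) with the widened fill word v, for 16 <= n <= 32.
 * The windows [0,16) and [n-16,n) overlap and jointly cover every
 * byte, so no per-byte tail handling is needed. */
static void set16to32(unsigned char *d, uint64_t v, size_t n)
{
	memcpy(d, &v, 8);           /* bytes 0..7      */
	memcpy(d + 8, &v, 8);       /* bytes 8..15     */
	memcpy(d + n - 16, &v, 8);  /* bytes n-16..n-9 */
	memcpy(d + n - 8, &v, 8);   /* bytes n-8..n-1  */
}

int main(void)
{
	unsigned char buf[27];
	set16to32(buf, 0x2a2a2a2a2a2a2a2aull, sizeof buf);
	for (size_t i = 0; i < sizeof buf; i++)
		assert(buf[i] == 0x2a);
	return 0;
}

With only a 32-bit widened value (from the cheaper 32-bit imul), the
same coverage would need 4-byte stores instead, which is the tradeoff
mentioned above.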