From: Rich Felker
Subject: Re: [PATCH] x86_64/memset: use "small block" code for blocks up to 30 bytes long
Date: Sat, 14 Feb 2015 14:35:33 -0500
To: musl@lists.openwall.com
Message-ID: <20150214193533.GK23507@brightrain.aerifal.cx>
In-Reply-To: <1423845589-5920-1-git-send-email-vda.linux@googlemail.com>

On Fri, Feb 13, 2015 at 05:39:49PM +0100, Denys Vlasenko wrote:
> Before this change, we were using it only for 15-byte blocks and smaller.
> Measurements on Sandy Bridge CPU show that "rep stosq" setup time
> is high enough to dominate speed of fills well above that size:

I just ran some tests with the latest three patches, including this one.
Aside from the significant improvement for sizes 16-30 in this last
patch, each patch makes various sizes in the 2-15 range mildly slower. A
few of them go from an average of 9 cycles to 11; most increase by just
1 cycle. And all sizes 8 and under are still slower than the C code.

I don't think these small sizes are a typical deliberate use case for
memset, but even a couple of cycles is a large relative difference at
such sizes, and they could arise in generic code (think of something
like qsort, which calls memcpy with possibly small sizes, but with a
function using memset instead) and make a significant performance
difference.

The main change whose value I really question is the conditional
widen_rax. If the value isn't used until a few cycles after the imul
instruction, doing the widening unconditionally is probably cheaper than
testing and branching, even when the branch is predictable.

Rich
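
For concreteness, here is a rough C sketch of the two techniques under
discussion: widening the fill byte with an unconditional multiply (the C
analogue of doing the imul without a test-and-branch) and covering small
blocks with overlapping 8-byte stores so the "rep stosq" setup cost
never comes into play. This is not the musl asm or the patch being
reviewed; the function name and the 32-byte cutoff are made up for
illustration.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

void *memset_sketch(void *dest, int c, size_t n)
{
    unsigned char *d = dest;

    /* Unconditional widen: multiplying the byte by
     * 0x0101010101010101 replicates it into every byte of a
     * 64-bit word, with no test or branch on c == 0. */
    uint64_t w = 0x0101010101010101ULL * (unsigned char)c;

    if (n < 8) {
        /* Tiny sizes: a plain byte loop, in the spirit of the
         * C code that still wins at these sizes. */
        for (; n; n--) *d++ = (unsigned char)c;
        return dest;
    }

    if (n <= 32) {
        /* Small-block path (covers the 16-30 byte range the
         * patch moves off of rep stosq): a few possibly
         * overlapping 8-byte stores, no loop and no rep setup. */
        memcpy(d, &w, 8);
        memcpy(d + n - 8, &w, 8);
        if (n > 16) {
            memcpy(d + 8, &w, 8);
            memcpy(d + n - 16, &w, 8);
        }
        return dest;
    }

    /* Bulk path: store 8 bytes at a time; the asm would use
     * rep stosq here, whose setup cost is what the small paths
     * above avoid. */
    for (size_t i = 0; i + 8 <= n; i += 8)
        memcpy(d + i, &w, 8);
    memcpy(d + n - 8, &w, 8); /* tail store, may overlap */
    return dest;
}

The overlapping head and tail stores are what let sizes up to the cutoff
be filled with no loop at all; the memcpy calls are just the portable
way to spell unaligned 8-byte stores and compile down to single moves.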