From: Szabolcs Nagy
To: musl@lists.openwall.com
Subject: Re: [PATCH] math: optimize lrint on 32bit targets
Date: Mon, 23 Sep 2019 20:38:19 +0200
Message-ID: <20190923183818.GE22009@port70.net>
In-Reply-To: <20190923174029.GN9017@brightrain.aerifal.cx>
References: <20190921155234.GA22009@port70.net> <20190922204335.GC22009@port70.net> <20190923174029.GN9017@brightrain.aerifal.cx>
Reply-To: musl@lists.openwall.com

* Rich Felker [2019-09-23 13:40:29 -0400]:
> On Sun, Sep 22, 2019 at 10:43:35PM +0200, Szabolcs Nagy wrote:
> > +long lrint(double x)
> > +{
> > +	uint32_t abstop = asuint64(x)>>32 & 0x7fffffff;
> > +	uint64_t sign = asuint64(x) & (1ULL << 63);
> > +
> > +	if (abstop < 0x41dfffff) {
> > +		/* |x| < 0x7ffffc00, no overflow */
> > +		double_t toint = asdouble(asuint64(1/EPS) | sign);
> > +		double_t y = x + toint - toint;
> > +		return (long)y;
> > +	}
> > +	return lrint_slow(x);
> > +}
> > #else
> > long lrint(double x)
> > {
>
> This code should be considerably faster than calling rint on 64-bit
> archs too, no? I wonder if it should be something like (untested,
> written inline here):

yeah i'd expect it to be a bit faster, but e.g. a target may prefer
sign ? -1/EPS : 1/EPS to 1/EPS | sign, and you need a threshold check
even if there is no inexact overflow issue:

> long lrint(double x)
> {
> 	uint32_t abstop = asuint64(x)>>32 & 0x7fffffff;
> 	uint64_t sign = asuint64(x) & (1ULL << 63);
>
> #if LONG_MAX < 1U<<53 && defined(FE_INEXACT)
> 	if (abstop >= 0x41dfffff) return lrint_slow(x);
#else
	if (abstop >= 0x43300000) return (long)x; /* |x| < 2^52 <= 1/EPS */
> #endif
> 	/* |x| < 0x7ffffc00, no overflow */
> 	double_t toint = asdouble(asuint64(1/EPS) | sign);
> 	double_t y = x + toint - toint;
> 	return (long)y;
> }

i can try to benchmark this (although on x86_64 and aarch64 there is a
single-instruction lrint, so i can only benchmark machines where this
is not relevant).