From: Szabolcs Nagy
Date: Wed, 30 Jan 2019 13:19:47 +0100
Subject: Re: Possible Mistype in exp.c
To: musl@lists.openwall.com
Message-ID: <20190130121947.GF21289@port70.net>

* Damian McGuckin [2019-01-30 22:14:01 +1100]:
> On Wed, 30 Jan 2019, Szabolcs Nagy wrote:
> > there is not much published about the algorithms (although there
> > aren't many new ideas in the code,
>
> Sadly, a lot of the existing ideas are not documented well. Even the
> texts on this topic seem to be lacking the information needed to
> actually implement a fast low-level routine. They cover fundamentals
> well, and important topics like error analysis, but omit the little
> details and basic techniques that are needed to make it fast and
> still achieve that 1 ULP (or better) error bound.

i found "Jean-Michel Muller - Elementary Functions" useful for the math.

for < 1 ulp the main trick is computing the result as x+y where x is
exact and y is inexact but much smaller than x in magnitude. (this is
the only way to get worst case error < 1 ulp when the final fp
operation does a rounding and at least one of its inputs is inexact.)
the bigger the distance between x and y in magnitude, the closer you
get to 0.5 ulp (the rounding error of the final add), and to get an
exact x you sometimes need exact add and mul algorithms.
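to make that concrete, here is a minimal sketch of the standard exact
add (fast2sum) and exact mul (via fma) building blocks in c; the names
and exact form are mine, not taken from the musl source:

#include <math.h>

/* fast2sum: x + y == *hi + *lo exactly, assuming |x| >= |y| and
   round-to-nearest; *hi is the rounded sum, *lo its rounding error */
static void fast2sum(double x, double y, double *hi, double *lo)
{
	*hi = x + y;
	*lo = y - (*hi - x);
}

/* exact mul: x*y == *hi + *lo exactly (barring over/underflow),
   using fma to recover the rounding error of the product */
static void mul2(double x, double y, double *hi, double *lo)
{
	*hi = x * y;
	*lo = fma(x, y, -*hi);
}

the last step of a sub-ulp routine is then a single rounded add of a
large hi and a tiny lo, so the total error stays close to the 0.5 ulp
of that final operation.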
> > it's mostly about optimizing for modern cpus), but i plan to at
> > least put the polynomial and lookup table generation code in that
> > repo.
>
> Thanks. For some reason, I have this aversion to table lookup because
> suddenly there are lots more numbers hanging about. But, it certainly
> helps to achieve optimal performance. I had a look at the GLIBC exp.c
> which comes out of the Ultimate Library done by IBM and I find that
> impossible to read/comprehend, especially with its table lookups.

table lookup is needed

1) to make the approximation efficient: the main reason why fdlibm is
slow is that it prefers div or large polynomials over table lookups.
(the other reason is that it has long dependency chains between
operations, while modern cpus can evaluate many operations in parallel
if there are no dependencies between them.)

2) to get close to 0.5 ulp errors: usually there is a polynomial
evaluation like c0 + c1*x + ... where c1*x must be much smaller than
c0 to get close to 0.5 ulp error, which requires reducing x into a
very small range around 0, and that usually requires a table lookup
(otherwise several terms of the polynomial would have to be evaluated
with multi-precision tricks, which is a lot of operations).

> > the polynomial coefficients were generated with the sollya tool
> > from inria.
>
> Yes, nice tool. What range reduction is used for those routines and
> why are there several polynomials? Just curious.

the comment in the code says

/* exp(x) = 2^(k/N) * exp(r), with exp(r) in [2^(-1/2N),2^(1/2N)]. */
/* x = ln2/N*k + r, with int k and r in [-ln2/2N, ln2/2N]. */

so

	k = toint(N/ln2*x);
	r = x - ln2/N*k;

to compute 2^(k/N) a lookup table is used (N is the table size), and
exp(r) is a polynomial.

the different polynomials were computed for different precision and
table size requirements, and because some targets lack an efficient
toint in non-nearest rounding modes, so the reduction interval is
larger for them. in the end the same polynomial is used in glibc,
bionic, newlib and musl (where the algorithms were upstreamed), and
the polynomial error is small enough on the wider interval that the
non-nearest toint never causes more than 1 ulp error in non-nearest
rounding modes.
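to make the reduction concrete, here is a toy version of the whole
scheme in c. this is an illustration only: the names, the runtime
table fill and the crude polynomial are mine, accuracy is only a few
ulp, and there is no special case handling for overflow, underflow,
inf or nan, unlike the real code:

#include <math.h>
#include <stdio.h>

#define N 128                     /* table size, as in the comment above */
#define LN2 0x1.62e42fefa39efp-1  /* ln(2) rounded to double */

static double T[N];               /* T[i] ~ 2^(i/N); precomputed constants
                                     (with correction terms) in the real code */

static double toy_exp(double x)
{
	/* x = ln2/N*k + r, with int k and r in [-ln2/2N, ln2/2N] */
	double k = round(N/LN2 * x);  /* stands in for toint() */
	double r = x - LN2/N * k;
	long ki = (long)k;
	int i = ki & (N - 1);         /* k mod N: table index for 2^(i/N) */
	int e = (ki - i) / N;         /* integer part of k/N: the exponent */
	/* exp(r) on the tiny reduced interval; the real coefficients
	   come from sollya and give much smaller error */
	double p = 1 + r + r*r*(0.5 + r*(1/6.0 + r*(1/24.0)));
	return scalbn(T[i] * p, e);   /* 2^(k/N) * exp(r) */
}

int main(void)
{
	for (int i = 0; i < N; i++)
		T[i] = exp2(i / (double)N);
	printf("%.17g %.17g\n", toy_exp(1.0), exp(1.0));
}

the real code avoids the scalbn call by merging k into the exponent
bits of the table entry, and it splits each entry into a high and a
low part so the final result can be assembled with the x+y trick
described earlier.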