```
Hi everyone,

I'm writing test cases for the complex math functions. Consider the
following simple program:

#include <stdio.h>
#include <complex.h>
int main(int argc, char *argv[])
{
	double complex d = 3.0 + 6.6*I;
	double complex ret = casinh(d);
	printf("casinh(3.0+6.6*I)=%.15f+%.15f*I\n", creal(ret), cimag(ret));

	return 0;
}

With musl libc, the result is:

casinh(3.0+6.6*I)=2.671002221994648+1.140551372972568*I

but with glibc, the result is:

casinh(3.0+6.6*I)=2.671002221994652+1.140551372972565*I

We can see that musl is less accurate, and I'm trying to solve this
problem. Having little knowledge of numerical computing, any comment is
appreciated! Thank you in advance!

BR,
Song Yue
```

```
* mg1633068 <songyue@smail.nju.edu.cn> [2020-02-11 23:51:51 +0800]:
> Hi everyone,
> I'm writing testcases for complex math function. Considering the following simple code
>
> #include <stdio.h>
> #include <complex.h>
> int main(int argc, char *argv[])
> {
> double complex d = 3.0+6.6*I;
> double complex ret = casinh(d);
> printf("casinh(3.0+6.6*I)=%.15f+%.15f*I\n", creal(ret), cimag(ret));
>
> return 0;
> }
>
> With musl libc, the result is:
> casinh(3.0+6.6*I)=2.671002221994648+1.140551372972568*I
> but with glibc, the result is:
> casinh(3.0+6.6*I)=2.671002221994652+1.140551372972565*I
>
> We can see that musl is less accurate. I'm trying to solve this problem.
> With little knowledge of numerical computing, any comment is appreciated!
do you mean you are trying to fix the code in musl?

that's welcome, but i think it will be hard without
numerical computing knowledge.

several complex math functions in musl are not
correct (implemented in a very naive way), but
fixing them is significant effort.

in this particular case the 8 and 12 ulp errors
on the real and imaginary parts are still
considered small errors (glibc has 1 and 2 ulp
errors compared to the correctly rounded result).
```

```
On Tue, Feb 11, 2020 at 08:30:59PM +0100, Szabolcs Nagy wrote:
> * mg1633068 <songyue@smail.nju.edu.cn> [2020-02-11 23:51:51 +0800]:
> > Hi everyone,
> > I'm writing testcases for complex math function. Considering the following simple code
> >
> > #include <stdio.h>
> > #include <complex.h>
> > int main(int argc, char *argv[])
> > {
> > double complex d = 3.0+6.6*I;
> > double complex ret = casinh(d);
> > printf("casinh(3.0+6.6*I)=%.15f+%.15f*I\n", creal(ret), cimag(ret));
> >
> > return 0;
> > }
> >
> > With musl libc, the result is:
> > casinh(3.0+6.6*I)=2.671002221994648+1.140551372972568*I
> > but with glibc, the result is:
> > casinh(3.0+6.6*I)=2.671002221994652+1.140551372972565*I
> >
> > We can see that musl is less accurate. I'm trying to solve this problem.
> > With little knowledge of numerical computing, any comment is appreciated!
>
> do you mean you are trying to fix the code in musl?
>
> that's welcome, but i think it will be hard without
> numerical computing knowledge.
>
> several complex math functions in musl are not
> correct (implemented in a very naive way), but
> fixing them is significant effort.
>
> in this particular case the 8 and 12 ulp errors
> on the real and imaginary parts are still
> considered small errors (glibc has 1 and 2 ulp
> errors compared to the correctly rounded result).
Indeed, doing much better than musl's complex functions do now looks
like it would require a good deal of numerical analysis. The current
ones are based pretty much entirely on identities that are true on the
complex numbers, but that introduce additional error at each step when
used on floating point representations. I'm kinda surprised they do as
well as they do now.

On targets with a long double type having more precision than double,
you might be able to cheat and call the long double versions instead,
then drop to double in the result, to get a few more places of
precision. But that won't help on targets where ld==double.

High-quality complex math functions are a long-term wishlist item for
musl but nobody has stepped up to do them and I don't really feel like
doing it, at least not over other improvements I could be working on.
This might be an area well-served by sponsored enhancement if there's
a user who needs them improved with resources to pay someone to do it.

Rich
```

```
On Tue, 11 Feb 2020, Rich Felker wrote:

> High-quality complex math functions are a long-term wishlist item for
> musl but nobody has stepped up to do them and I don't really feel like
> doing it, at least not over other improvements I could be working on.

As you say, they do work reasonably well even now.

With even complex arithmetic being "unspecified" in IEEE-754-2019, it
is an interesting issue where "interesting" does not have the
conventional English sense of being a positive thing.

> This might be an area well-served by sponsored enhancement if there's
> a user who needs them improved with resources to pay someone to do it.

Depending on your definition of "improved", it is anything from a big
task to a huge task.

And sadly, I have other things on my plate too.

My 2c - Damian
```

```
On Wed, Feb 12, 2020 at 11:46:08AM +1100, Damian McGuckin wrote:
> On Tue, 11 Feb 2020, Rich Felker wrote:
>
> >High-quality complex math functions are a long-term wishlist item for
> >musl but nobody has stepped up to do them and I don't really feel like
> >doing it, at least not over other improvements I could be working on.
>
> As you say, they do work reasonably well even now.
>
> With even complex arithmetic being "unspecified" in IEEE-754-2019,
> it is an interesting issue where "interesting" does not have the
> conventional English sense of being a positive thing.
>
> >This might be an area well-served by sponsored enhancement if there's
> >a user who needs them improved with resources to pay someone to do it.
>
> Depending on your definition of "improved", it is anything from a
> big task to a huge task.
>
> And sadly, I have other things on my plate too.
My minimal criterion for large-scale improvements of src/complex would
be fixing any remaining cases where inf/nan behavior is badly wrong or
there's catastrophic error (>2^52 ulp, or even just >2^20 ulp or so).

Beyond that, I think "reducing ulp error" would be nice but hard to
quantify and make a goal without having an idea how bad it is now, not
to mention without having rigorous error bounds on the real math
library functions.

Rich
```

```
On Tue, 11 Feb 2020, Rich Felker wrote:
> My minimal criterion for large-scale improvements of src/complex would
> be fixing any remaining cases where inf/nan behavior is badly wrong or
> there's catastrophic error (>2^52 ulp, or even just >2^20 ulp or so).
> Beyond that, I think "reducing ulp error" would be nice but hard to
> quantify and make a goal without having an idea how bad it is now, not
> to mention without having rigorous error bounds on the real math
> library functions.
I think INF/NaN behaviour at the fundamental level is flawed.
This initialization:
double complex x = 1.0e+200 + INFINITY * I;
on every compiler I try, yields an 'x' of
NaN + INFINITY i
Whereas if I compute
double complex a = 2.0 + 1.0e200 * I;
double complex b = 1.0e200 + 1.0 * I;
double complex x = a * b;
then 'x' prints correctly as
1.0e+200 + INF * I;
Regards - Damian
```

```
On Wed, Feb 12, 2020 at 03:07:17PM +1100, Damian McGuckin wrote:
> On Tue, 11 Feb 2020, Rich Felker wrote:
>
> > My minimal criterion for large-scale improvements of src/complex would
> > be fixing any remaining cases where inf/nan behavior is badly wrong or
> > there's catastrophic error (>2^52 ulp, or even just >2^20 ulp or so).
> > Beyond that, I think "reducing ulp error" would be nice but hard to
> > quantify and make a goal without having an idea how bad it is now, not
> > to mention without having rigorous error bounds on the real math
> > library functions.
>
> I think INF/NaN behaviour at the fundamental level is flawed.
>
> This initialization:
>
> double complex x = 1.0e+200 + INFINITY * I;
>
> on every compiler I try, yields an 'x' of
>
> NaN + INFINITY i

This is expected unless the implementation supports pure-imaginary
types and an _Imaginary_I (as opposed to _Complex_I). Note that Annex G
requires this, but GCC treats it as optional and does not support it
despite claiming Annex G conformance.

In this particular kind of case, the right solution is to use the CMPLX
macro instead of the * operator with I:

double complex x = CMPLX(1.0e+200, INFINITY);

> Whereas if I compute
>
> double complex a = 2.0 + 1.0e200 * I;
> double complex b = 1.0e200 + 1.0 * I;
> double complex x = a * b;
>
> then 'x' prints correctly as
>
> 1.0e+200 + INF * I;

Indeed, the compiler is required to handle these overflow cases unless
#pragma STDC CX_LIMITED_RANGE ON is in effect. I don't know whether GCC
honors the pragma, but it has -fcx-limited-range to set it globally.
Note that without CX_LIMITED_RANGE, complex math performance is
atrociously bad due to the extensive overflow checking.

Rich
```