mailing list of musl libc
* [musl] Why thousands of digits in floatscan.c?
@ 2022-10-09  8:45 Markus Wichmann
2022-10-09 10:35 ` Szabolcs Nagy
From: Markus Wichmann @ 2022-10-09  8:45 UTC (permalink / raw)
To: musl

Hi all,

so I recently read this wonderful paper: https://dl.acm.org/doi/10.1145/103162.103163

A great read; I recommend it to everyone, even though some things are
not directly applicable to the C application programmer.

Anyway, towards the end of the paper, Goldberg makes the argument that a
9 digit decimal number is always enough to uniquely identify a
single-precision number. Generalizing out from his argument, I
calculated that 36 decimal digits are needed for a quad-precision number
(the digit count for a w bit mantissa is ceil(w * log10(2)) + 1, which is
9 for single-prec, 17 for double-prec, 21 for double-extended, and 33
for double-double).

Now, I looked again at floatscan.c. The algorithm in decfloat() mostly
consists of reading all the decimal digits into a base 1 billion number,
which could have any length at all, then aligning the radix point with a
word boundary, and then shifting that number such that exactly one
mantissa worth of bits are left of the radix point. For the most part,
the algorithm does not care about the remaining bits except for two
pieces of information: whether the tail is equal to 0 and, if not, how
it compares to 0.5. So now I am left wondering why, if only the first
LDBL_MANT_DIG bits of the number, plus two more for the tail, ever count
for anything, the algorithm makes room for thousands of decimal digits.
Would it not be enough to save a few dozen? What purpose do the far-back
digits have?

To be clear, I am totally aware of why there needs to be room for
thousands of digits in fmt_fp(): All of those digits are in the number
and could be requested. The smallest representable positive number on
the PC is 2^-16445 (that is, it has an exponent of -16382 and a
mantissa of 2^-63), and that is a number with 16445 decimal places
(though the first 4950 of those are 0 and don't need to be saved
explicitly, but then you'd need wrap-around semantics there as well).

Ciao,
Markus

* Re: [musl] Why thousands of digits in floatscan.c?
2022-10-09  8:45 [musl] Why thousands of digits in floatscan.c? Markus Wichmann
@ 2022-10-09 10:35 ` Szabolcs Nagy
2022-10-11 17:46   ` Rich Felker
From: Szabolcs Nagy @ 2022-10-09 10:35 UTC (permalink / raw)
To: Markus Wichmann; +Cc: musl

* Markus Wichmann <nullplan@gmx.net> [2022-10-09 10:45:08 +0200]:
> Hi all,
>
> so I recently read this wonderful paper: https://dl.acm.org/doi/10.1145/103162.103163
>
> Great read, I encourage it for everyone, even though some things are not
> directly applicable to the C application programmer.
>
> Anyway, towards the end of the paper, Goldberg makes the argument that a
> 9 digit decimal number is always enough to uniquely identify a
> single-precision number. Generalizing out from his argument, I
> calculated that 36 decimal digits are needed for a quad-precision number
> (the digit count for a w bit mantissa is roundup(w * lg 2) + 1, which is
> 9 for single-prec, 17 for double-prec, 21 for double-extended, and 33
> for double-double).

this is {FLT,DBL,LDBL}_DECIMAL_DIG

>
> Now, I looked again at floatscan.c. The algorithm in decfloat() mostly
> consists of reading all the decimal digits into a base 1 billion number,
> which could have any length at all, then aligning the radix point with a
> word boundary, and then shifting that number such that exactly one
> mantissa worth of bits are left of the radix point. For the most part,
> the algorithm does not care about the remaining bits except for two bits
> of information: If the tail is equal to 0, or else how it compares to
> 0.5. So now I am left wondering why then, if only the first
> LDBL_MANT_DIG bits of the number, plus two more for the tail, ever count
> for anything, the algorithm makes room for thousands of decimal digits.
> Would it not be enough to save a few dozen? What purpose do the far-back
> digits have?

*_DECIMAL_DIG is enough if we know the digits are the decimal
conversion of an actual fp number of the right type and right
rounding mode.

but e.g. it can be the digits of a number exactly halfway between
two fp numbers. that sequence of digits would never occur when an
actual fp number is printed, but it is still a valid input and has
to be read to the end to decide which way to round. (assuming
nearest rounding mode.)

the worst case is halfway between {FLT,DBL,LDBL}_TRUE_MIN and 0.
and the digits of that would be

2^{-150,-1075,-16495} = 5^{150,..} * 10^{-150,..}

i.e. 0. followed by many zeros and then the digits of 5^16495.
depending on the last digit it has to be rounded up or down.

>
> To be clear, I am totally aware of why there needs to be room for
> thousands of digits in fmt_fp(): All of those digits are in the number
> and could be requested. The smallest representable positive number on
> the PC is 2^-16445 (that is, it has an exponent of -16382 and a
> mantissa of 2^-63), and that is a number with 16445 decimal places
> (though the first 4950 of those are 0 and don't need to be saved
> explicitly, but then you'd need wrap-around semantics there as well).
>
> Ciao,
> Markus

* Re: [musl] Why thousands of digits in floatscan.c?
2022-10-09 10:35 ` Szabolcs Nagy
@ 2022-10-11 17:46   ` Rich Felker
0 siblings, 0 replies; 3+ messages in thread
From: Rich Felker @ 2022-10-11 17:46 UTC (permalink / raw)
To: Markus Wichmann, musl

On Sun, Oct 09, 2022 at 12:35:50PM +0200, Szabolcs Nagy wrote:
> * Markus Wichmann <nullplan@gmx.net> [2022-10-09 10:45:08 +0200]:
> > Hi all,
> >
> > so I recently read this wonderful paper: https://dl.acm.org/doi/10.1145/103162.103163
> >
> > Great read, I encourage it for everyone, even though some things are not
> > directly applicable to the C application programmer.
> >
> > Anyway, towards the end of the paper, Goldberg makes the argument that a
> > 9 digit decimal number is always enough to uniquely identify a
> > single-precision number. Generalizing out from his argument, I
> > calculated that 36 decimal digits are needed for a quad-precision number
> > (the digit count for a w bit mantissa is ceil(w * log10(2)) + 1, which is
> > 9 for single-prec, 17 for double-prec, 21 for double-extended, and 33
> > for double-double).
>
> this is {FLT,DBL,LDBL}_DECIMAL_DIG
>
> >
> > Now, I looked again at floatscan.c. The algorithm in decfloat() mostly
> > consists of reading all the decimal digits into a base 1 billion number,
> > which could have any length at all, then aligning the radix point with a
> > word boundary, and then shifting that number such that exactly one
> > mantissa worth of bits are left of the radix point. For the most part,
> > the algorithm does not care about the remaining bits except for two bits
> > of information: If the tail is equal to 0, or else how it compares to
> > 0.5. So now I am left wondering why then, if only the first
> > LDBL_MANT_DIG bits of the number, plus two more for the tail, ever count
> > for anything, the algorithm makes room for thousands of decimal digits.
> > Would it not be enough to save a few dozen? What purpose do the far-back
> > digits have?
>
> *_DECIMAL_DIG is enough if we know the digits are the decimal
> conversion of an actual fp number of the right type and right
> rounding mode.
>
> but e.g. it can be the digits of a number halfway between two fp
> numbers exactly. that sequence of digits would never occur when
> an actual fp number is printed but it is still a valid input and
> has to be read to the end to decide which way to round. (assuming
> nearest rounding mode.)
>
> the worst case is halfway between {FLT,DBL,LDBL}_TRUE_MIN and 0.
> and the digits of that would be
>
>  2^{-150,-1075,-16495} = 5^{150,..} * 10^{-150,..}
>
> i.e. 0. followed by many zeros and then the digits of 5^16495.
> depending on the last digit it has to be rounded up or down.

Yep, this.

Also, AFAICT it's impossible to tell, before you know the exponent, at
what point you might be able to discard further mantissa digits and
still get the right answer. There may be some shortcuts for this, but
it's non-obvious, and if you need to buffer all the mantissa digits
anyway, the b1b representation is about the most efficient you can
make it.

Further, musl's floatscan raises FE_INEXACT exactly when the
conversion from a decimal string to binary floating point is inexact.
I don't think this is a requirement, but it's a desirable property and
it comes nearly for free if you do the above right.

Rich
