Hello, Can the "musl" libc project consider supporting the Posit number format in the math routines? More details; https://posithub.org/docs/Posits4.pdf https://posithub.org/docs/BeatingFloatingPoint.pdf And a sample implementation; https://gitlab.com/cerlane/SoftPosit Best regards, ~Mayuresh
Hello,

> Can the "musl" libc project consider supporting the Posit number format in the math routines?
>
> More details;
> https://posithub.org/docs/Posits4.pdf
> https://posithub.org/docs/BeatingFloatingPoint.pdf
>
> And a sample implementation;
> https://gitlab.com/cerlane/SoftPosit

I am not a musl contributor and have no say in what it should contain or
not, but why in hell would a software implementation of a non-standard
floating-point format, which only its inventor seems to think has any
concrete advantage over IEEE 754, belong in a libc whose goals are
quoted below?

“lightweight, fast, simple, free, and strives to be correct in the sense
of standards-conformance and safety.” (from https://musl.libc.org/ )

Posits are 1 out of 5 (I think they are free).

Pascal
On Monday, June 29, 2020 05:34 PM IST, Pascal Cuoq <cuoq@trust-in-soft.com> wrote:
> > Can the "musl" libc project consider supporting the Posit number format in the math routines?
>
> > More details;
> > https://posithub.org/docs/Posits4.pdf
> > https://posithub.org/docs/BeatingFloatingPoint.pdf
>
> > And a sample implementation;
> > https://gitlab.com/cerlane/SoftPosit
>
> I am not a musl contributor and have no say in what it should contain or not, but why in hell would a software implementation of a non-standard floating-point format, which only its inventor seems to think has any concrete advantage over IEEE 754, belong in a libc whose goals are quoted below?
>
> “lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety.” (from https://musl.libc.org/ )
>
> Posits are 1 out of 5 (I think they are free).
Posits are lightweight, fast, free and produce the same results across platforms, something which IEEE 754 doesn't guarantee. To top that, IEEE 754 isn't even a standard but just a set of guidelines which are usually implemented incorrectly due to misinterpretation or lack of expertise. So in that sense, Posits are safer than Floating-point.
That makes Posits 4 out of 5 (which seems a much better proposition).
~Mayuresh
* mayuresh@kathe.in <mayuresh@kathe.in> [2020-06-29 14:56:09 +0200]:
> On Monday, June 29, 2020 05:34 PM IST, Pascal Cuoq <cuoq@trust-in-soft.com> wrote:
>
> > > Can the "musl" libc project consider supporting the Posit number format in the math routines?
> >
> > > More details;
> > > https://posithub.org/docs/Posits4.pdf
> > > https://posithub.org/docs/BeatingFloatingPoint.pdf
> >
> > > And a sample implementation;
> > > https://gitlab.com/cerlane/SoftPosit
> >
> > I am not a musl contributor and have no say in what it should contain or not, but why in hell would a software implementation of a non-standard floating-point format, which only its inventor seems to think has any concrete advantage over IEEE 754, belong in a libc whose goals are quoted below?
> >
> > “lightweight, fast, simple, free, and strives to be correct in the sense of standards-conformance and safety.” (from https://musl.libc.org/ )
> >
> > Posits are 1 out of 5 (I think they are free).
>
> Posits are lightweight, fast, free and produce the same results across platforms, something which IEEE 754 doesn't guarantee. To top that, IEEE 754 isn't even a standard but just a set of guidelines which are usually implemented incorrectly due to misinterpretation or lack of expertise. So in that sense, Posits are safer than Floating-point.
>
> That makes Posits 4 out of 5 (which seems a much better proposition).
i would not hold my breath for posit support even if it was the
best possible floating-point format.

it has to be properly standardized and added to hw architectures.

then the related software standards need to be developed (abi,
programming language support, math library behaviour for special
cases, printf format specifiers, etc)

then the tooling support has to be added (compilers, emulators,
softfloat libraries, etc)

then we can come back and consider doing something about it in
musl.

(and even then it will take time for it to be usable in user
code: requires widely deployed hw, protocol and file format
updates, new algorithm designs and review of existing algorithms
for compatibility)
On Mon, Jun 29, 2020 at 06:26:42PM +0200, Szabolcs Nagy wrote:
> i would not hold my breath for posit support even if it was the
> best possible floating-point format.
>
> it has to be properly standardized and added to hw architectures.
>
> then the related software standards need to be developed (abi,
> programming language support, math library behaviour for special
> cases, printf format specifiers, etc)
>
> then the tooling support has to be added (compilers, emulators,
> softfloat libraries, etc)
>
> then we can come back and consider doing something about it in
> musl.
>
> (and even then it will take time for it to be usable in user
> code: requires widely deployed hw, protocol and file format
> updates, new algorithm designs and review of existing algorithms
> for compatibility)
In addition, Posit support is going to run into the problem that IEEE
754 implementations are readily available /right now/ and are "good
enough" for most applications. Hell, most applications don't even
require floating-point at all. The good can sometimes be the enemy of
the perfect, but here I am squarely on the side of pragmatism.
Ciao,
Markus
On Mon, Jun 29, 2020 at 08:15:40PM +0200, Markus Wichmann wrote:
> On Mon, Jun 29, 2020 at 06:26:42PM +0200, Szabolcs Nagy wrote:
> > i would not hold my breath for posit support even if it was the
> > best possible floating-point format.
> >
> > it has to be properly standardized and added to hw architectures.
> >
> > then the related software standards need to be developed (abi,
> > programming language support, math library behaviour for special
> > cases, printf format specifiers, etc)
> >
> > then the tooling support has to be added (compilers, emulators,
> > softfloat libraries, etc)
> >
> > then we can come back and consider doing something about it in
> > musl.
> >
> > (and even then it will take time for it to be usable in user
> > code: requires widely deployed hw, protocol and file format
> > updates, new algorithm designs and review of existing algorithms
> > for compatibility)
>
> In addition, Posit support is going to run into the problem that IEEE
> 754 implementations are readily available /right now/ and are "good
> enough" for most applications. Hell, most applications don't even
> require floating-point at all. The good can sometimes be the enemy of
> the perfect, but here I am squarely on the side of pragmatism.
Not only are they "good enough"; pretty much everyone except the
inventor of posits and a small fan club thinks IEEE 754 floating point
is *better* than posits.
In any case, musl is not a platform for launching and promoting new
experimental ideas. We implement interfaces where there is already
widespread consensus that they belong in libc, either as a result of a
published standard or agreement between multiple libc implementations
across different systems(*). The Austin Group (responsible for POSIX)
generally doesn't accept new ideas without a sponsor (an existing
implementation already committed to it) and agreement from members.
WG14 (C standard) sometimes takes new ideas but they usually turn out
botched when it happens; ideally they go through optional TRs first
and only become standard if they're widely liked.
Rich
(*) That's not entirely true. We also have thin syscall wrappers for
Linux-specific functionality, on the presumption that it's not any big
design or bloat commitment, but sometimes this turns out to be false,
e.g. in the case of sendmmsg/recvmmsg.
> Posits are lightweight, fast

Ah, this is a difference in the interpretation of words. I meant
lightweight in the sense that you can add two numbers in one instruction
without linking with and calling additional emulation code, and fast in
the sense that you can do from one to four additions per cycle depending
on what else you have to do at the same time. [1]

> and produce the same results across platforms, something which IEEE
> 754 doesn't guarantee. To top that, IEEE 754 isn't even a standard but
> just a set of guidelines which are usually implemented incorrectly due
> to misinterpretation or lack of expertise. So in that sense, Posits
> are safer than Floating-point.

This is a remarkable exaggeration. The actual differences of
interpretation between IEEE 754 implementations are extremely minor for
basic operations (it would be inappropriate to start discussing them
considering the current level of discourse).

On the other hand, there are differences between platforms in the
implementations of trigonometric and other math functions. This was a
well-weighted decision by the IEEE 754 standardization committee, in
order not to stifle research into better implementations of these
functions. Surely you are not referring to such differences? Because I
do not see how posits fix them (apart from having no implementations of
math functions at all).

If you bother to investigate where differences in floating-point
computations come from, as David Monniaux did [2], you end up with the
conclusion that all the inconsistencies come from:

A- hardware-design decisions made in the 1980s, when it was not
worthwhile to implement more than one format, so that several FPUs
implemented the (widest feasible) 80-bit double-extended format and let
compilers generate code that emulated single and double precision
imperfectly from it. Compilers could have done better, but the hardware
design decisions are partially to blame. Modern desktop hardware
provides single- and double-precision operations, of course, which
IEEE 754 was smart enough to standardize before it was practical to
have all three formats in a single FPU.

B- compilers that produce code violating IEEE 754 even when the
hardware offers a perfect implementation of it, e.g. generating an FMA
instruction for code that contains a multiplication and an addition.

I reluctantly admit that you have a point: all 0 of the processors that
provide posit basic operations implement the exact same semantics, and
all 0 of the compilers that support posits avoid the troublesome
optimizations that plague some programming languages.

Note that Type I unums had been the greatest floating-point format ever
invented (according to their inventor) for longer than posits have
existed, so if we did have hardware implementing these ideas, it would
probably be 60% Type I unum implementations, 35% Type II unums, and 5%
posits. You would not actually get the same results across platforms
for basic operations. In 2023, Gustafson will have a better idea yet.
Better to wait for unums Type IV.

[1] https://www.agner.org/optimize/instruction_tables.pdf
[2] https://hal.archives-ouvertes.fr/hal-00128124v5
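The effect described in point B can be shown without any compiler involved: what a contracted FMA changes is exactly the rounding of the intermediate product. This sketch (an illustration, not from the thread; Python's exact rationals stand in for a fused multiply-add) picks operands where rounding the product loses the entire result:

```python
from fractions import Fraction

# a is exactly representable in binary64; its square is not.
a = 1.0 + 2.0 ** -27        # 1 + 2^-27
c = -(1.0 + 2.0 ** -26)     # exactly -(1 + 2^-26)

# Separate multiply then add: the exact product 1 + 2^-26 + 2^-54 is
# rounded to 1 + 2^-26 before the addition, so the sum cancels to 0.
separate = a * a + c

# A fused multiply-add keeps the product exact until the final rounding;
# emulate that with rational arithmetic.
fused = float(Fraction(a) * Fraction(a) + Fraction(c))

print(separate)   # 0.0
print(fused)      # 2^-54, i.e. about 5.55e-17
```

A compiler that silently contracts `a * a + c` into an FMA thus changes the program's result, which is why C gates this behind the `FP_CONTRACT` pragma.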
I’ll bet anybody $100 that there will be practically useful quantum
computers before there’s a native hardware implementation of POSIT or
UNUM in a widely available general-purpose processor (something a
normal person can buy).

Every trend in the industry right now favors fixed-point or
low-precision floating-point for AI. It’s known that one can build fast
and accurate arbitrary precision from AI widgets
(https://arxiv.org/abs/1904.06376), and this, not crazy new formats
with which nobody has any serious experience, is the trend that
numerical computing is going to exploit in the future.

If somebody believes in new floating-point formats, they should release
a library for a widely available FPGA and prove out the method on at
least a few million lines of numerical code, to show the community why
they need to throw away decades of numerical software and rewrite a few
billion lines of code in an as-yet-unstandardized datatype.

Jeff

--
Jeff Hammond
jeff.science@gmail.com
http://jeffhammond.github.io/