At 10:53 AM 5/4/2002 +0200, Xavier Leroy wrote:
>I'm not taking sides here, just noticing that Java takes the computer
>engineering viewpoint and C (and Caml, by inheritance of
>implementation :-) takes the physicist's viewpoint...

I think it is more likely that C "just happened". Among other things, White and Steele had not yet written their paper on converting machine numbers to strings at the time C was created.

The computer engineering viewpoint (in this case) is the correct one for GENERAL use, because it guarantees that no matter how many times you parse and unparse a floating point number, you keep the same value. This seems like a good thing to me. It may be a value that is an approximation to an inaccurately measured quantity, but after the initial input-to-FP translation, the wiggling stops.

This does have one problem, in that the number of digits printed may vary quite a bit (e.g., between 1.25 and 1.333333333333333), depending upon whether a number has an exact binary representation. Because of this problem, and because the number of actually meaningful digits may vary, it makes plenty of sense to also have ways of printing that allow finer control.

I am not sure that either C or Java is worth blindly following here. Java's decimal format code is a bit baroque in its corner cases: depending upon how you combine #, 0, ., and , in a pattern, you can describe normal notation, scientific notation, or engineering-scientific notation. Unfortunately, the pattern language also lets you say many things that are nonsensical, and it is not immediately evident from looking at a format string what is desired.

It's also not entirely clear what rounding you are supposed to get when you ask for less-precise printing. The possibilities include:

1. round-toward-zero
2. round-toward-positive-infinity
3. round-toward-negative-infinity
4. round-toward-infinity (away from zero)
5. round-to-nearest, even-if-tie (1.5 --> 2, 2.5 --> 2)
6. round-to-nearest, toward-infinity-if-tie (1.5 --> 2, 2.5 --> 3, -1.5 --> -2)

Modes 1-3 and 5 correspond to rounding modes often supported in hardware. Mode 5 is the default answer from numerical people; I think it loses the smallest amount of information without introducing a bias. Mode 6 is what I think most people expect to see.

What often gets implemented (e.g., by Sun in their implementation of java.text.DecimalFormat) is actually a combination of two of these: first the number is formatted out to "full" (adequately precise) form, which assumes some sort of rounding, and then that is reduced in size using some more rounding. Double rounding is bad -- in the worst case, you might see 1.45 round (once) to 1.5 and then round (twice) to 2.0, even though 1.45 rounded directly to the nearest integer is 1. The magnitude of the error in actual formatting is generally much smaller (1.4999999999995 rounding to 1.5, e.g.), but it is still an error, and it is avoidable at low or no cost.

Gdtoa from netlib will do this correctly for you -- I have used it myself, for an implementation of java.text.DecimalFormat -- and it provides control over the rounding of the output as well. (I've attached a test program that illustrates this, crudely. It requires gdtoa, obviously.)

But, on the other hand, I really don't have a good idea of how someone might go about elegantly specifying the desired rounding in a format. It does matter -- people working with money (at least in the US) have very definite opinions about how half of anything is supposed to round (it rounds away from zero, towards the nearest infinity).

David Chase
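
P.S. A few small Java sketches to make these points concrete.

First, the round-trip guarantee: Double.toString is specified to emit just enough digits to pick the value out from its neighbors, so parsing its output always restores the identical double. A minimal sketch:

public class RoundTrip {
    public static void main(String[] args) {
        double[] samples = { 1.25, 4.0 / 3.0, 0.1, 1e-300 };
        for (double x : samples) {
            // toString emits just enough digits to identify x uniquely,
            // so re-parsing the output must give back the same bits.
            double y = Double.parseDouble(Double.toString(x));
            System.out.println(Double.toString(x) + "  round-trips: "
                    + (Double.doubleToLongBits(x) == Double.doubleToLongBits(y)));
        }
    }
}

Note how 1.25 prints short and 4.0/3.0 prints long, which is exactly the digit-count variability complained about above.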
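
Second, the pattern characters in question: # is an optional digit, 0 is a required digit, and an E-suffixed pattern selects exponential form; allowing more than one integer digit before the E is what silently switches you to engineering-style exponents. A sketch (the expected outputs in the comments assume DecimalFormat's default round-to-nearest behavior):

import java.text.DecimalFormat;

public class Patterns {
    public static void main(String[] args) {
        double x = 12345.6789;
        // Plain notation with grouping and two forced decimals.
        System.out.println(new DecimalFormat("#,##0.00").format(x));  // 12,345.68
        // Scientific notation: exactly one integer digit.
        System.out.println(new DecimalFormat("0.###E0").format(x));   // 1.235E4
        // Engineering-style: up to three integer digits forces the
        // exponent to be a multiple of three.
        System.out.println(new DecimalFormat("##0.###E0").format(x)); // 12.346E3
    }
}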
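
Third, the six rounding modes are easy to see side by side with java.math.BigDecimal (the RoundingMode names line up with the numbered list above: 1=DOWN, 2=CEILING, 3=FLOOR, 4=UP, 5=HALF_EVEN, 6=HALF_UP):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingModes {
    public static void main(String[] args) {
        RoundingMode[] modes = { RoundingMode.DOWN, RoundingMode.CEILING,
                RoundingMode.FLOOR, RoundingMode.UP,
                RoundingMode.HALF_EVEN, RoundingMode.HALF_UP };
        String[] inputs = { "1.5", "2.5", "-1.5" };
        for (RoundingMode m : modes) {
            System.out.printf("%-9s", m);
            for (String s : inputs) {
                // setScale(0, m) rounds to an integer under mode m; BigDecimal
                // is decimal, so the .5 ties are exact, not binary approximations.
                System.out.printf("  %s -> %s", s, new BigDecimal(s).setScale(0, m));
            }
            System.out.println();
        }
    }
}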
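
Fourth, the double-rounding failure, demonstrated in exact decimal with the 1.45 example above:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DoubleRounding {
    public static void main(String[] args) {
        BigDecimal x = new BigDecimal("1.45");
        // Correct: round 1.45 to an integer in one step --> 1.
        BigDecimal once = x.setScale(0, RoundingMode.HALF_UP);
        // Double rounding: 1.45 --> 1.5 (one digit), then 1.5 --> 2 (integer).
        BigDecimal twice = x.setScale(1, RoundingMode.HALF_UP)
                            .setScale(0, RoundingMode.HALF_UP);
        System.out.println("one rounding:  " + once);   // 1
        System.out.println("two roundings: " + twice);  // 2
    }
}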
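
And the money convention, in BigDecimal terms, is HALF_UP -- half a cent rounds away from zero, for credits and debits alike:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneyRounding {
    public static void main(String[] args) {
        // Ties round away from zero in both directions.
        System.out.println(
                new BigDecimal("2.675").setScale(2, RoundingMode.HALF_UP));  // 2.68
        System.out.println(
                new BigDecimal("-2.675").setScale(2, RoundingMode.HALF_UP)); // -2.68
    }
}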