
> Another way to phrase it is that a single float value represents a range of real numbers, rather than a single real number. So 0.1 stored as a double precision float really represents the range of real numbers from roughly 0.09999999999999999862 to 0.10000000000000001249.

But the range is different for every number; it's not a constant error like machine epsilon. I think "representation error" is a reasonable name, as it's the term used for the error introduced when converting a representation between base 10 and base 2.

> I wonder about the terminology of calling these "errors". That implies that there's a mistake, when really floats are among the most accurate ways to represent arbitrary real numbers in a finite number of bits.

If you only care about the irrational portion of the reals, maybe, but for rationals this is definitely not true. You could use a format based on fractions which, unlike IEEE 754, would contain no representation error relative to base-10 decimals; in fact it would even let you represent rationals that base-10 decimals cannot, such as 1/3. Inigo Quilez came up with one such format, "floating bar" [1]. There are advantages and disadvantages to such a format (e.g. performance), but in terms of numerical error I doubt IEEE 754 is best, and for representation error it definitely is not (and I say that as someone who likes the IEEE 754 design O_o).

[1] http://www.iquilezles.org/www/articles/floatingbar/floatingb...
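
To make the fraction idea concrete, here's a minimal sketch using Python's standard-library Fraction type (not Quilez's floating-bar encoding, but the same underlying idea: store an exact numerator/denominator pair instead of a binary expansion):

  from fractions import Fraction

  # 0.1 stored as a double is really this rational, not 1/10:
  print(Fraction(0.1))   # 3602879701896396953/36028797018963968

  # As exact rationals, 1/10 + 2/10 is exactly 3/10, no representation error:
  print(Fraction(1, 10) + Fraction(2, 10))   # 3/10

  # And 1/3 is representable exactly, which no finite base-10 decimal can do:
  print(Fraction(1, 3) * 3 == 1)   # True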



Hm, yes, perhaps "error" is still a good word for it.

I'm not sure I understand what you mean by "holes". The idea is that all the real numbers between roughly 0.09999999999999999862 and 0.10000000000000001249 are represented by the same double precision floating point value as 0.1 itself.

Thinking about it in terms of ranges helps explain why 0.1 + 0.2 goes "wrong": the middle of the range of real numbers represented by the double nearest to 0.1 is slightly higher than 0.1.
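
You can inspect those stored values directly; in Python, Decimal(float) prints the exact real number a double actually holds (a quick illustration):

  from decimal import Decimal

  # The exact real numbers the doubles nearest 0.1 and 0.2 actually store:
  print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
  print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125

  # Both sit slightly above their decimal targets, so the exact sum lands
  # above 0.3 and rounds to the double just above it:
  print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
  print(0.1 + 0.2 == 0.3)    # False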


Sorry, I edited that away as I realised it was not strictly relevant to what you meant... I was highlighting how there are values in your range that _can_ be represented exactly, e.g. 0.125. Like I said, not really relevant.


What you should be saying is not "can be represented exactly" but "can be represented identically in decimal"

There is nothing inherently less accurate about binary floating-point representation than decimal. Some binary numbers are identical to their decimal counterparts, while others aren't. This is OK, as we should know what amount of precision is required in our answer and round to that.
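
For instance, in Python (a standard illustration of comparing at the precision the answer needs, rather than bit-for-bit):

  import math

  print(0.1 + 0.2 == 0.3)               # False: exact bit comparison
  print(round(0.1 + 0.2, 10) == 0.3)    # True: rounded to 10 decimal places
  print(math.isclose(0.1 + 0.2, 0.3))   # True: relative-tolerance comparison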


> What you should be saying is not "can be represented exactly" but "can be represented identically in decimal"

No, "exactly" is right: a non-periodic decimal such as 0.1 is exact, it represents exactly 1/10, but base 2 cannot represent 1/10 exactly.

> There is nothing inherently less accurate about binary floating-point representation than decimal.

Yes there is, and this is the part that most people do not intuit. It was the entire point I was making in my original comment about formatting deceptively masking the representation error of the fractional part... we are not talking merely about precision, but about representation error, which depends on the base:

        base10  base3  base2
  1/10  yes     no     no
  1/3   no      yes    no
  1/2   yes     no     yes
For one-decimal-place fractions in base 10 (i.e. numerators 0 through 9 over a denominator of 10) you will find that only 0.0 and 0.5 can be represented exactly in IEEE 754 binary. IEEE 754 actually specifies a decimal format too, though it's rarely used; if you were to implement it you would see these discrepancies between binary and decimal using the same format at any precision, by noticing that the significand's binary expansion is periodic where the decimal source representation is non-periodic.
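
A quick way to verify that (the round-trip test below is mine): compare each tenth as an exact rational against the double it converts to:

  from fractions import Fraction

  # Which one-digit decimal fractions survive conversion to a double?
  for k in range(10):
      exact = Fraction(k, 10) == Fraction(k / 10)  # exact value vs nearest double
      print(f"{k}/10 exact in binary: {exact}")
  # Only 0/10 and 5/10 print True; every other tenth picks up
  # representation error the moment it's stored.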

This is not a deficiency of IEEE 754 per se, but of the entire concept of a radix point in any base, which makes it impossible to finitely represent all rational numbers, turning some rationals into pseudo-irrationals as a side effect of the representation... the solution, of course, is to use fractions.
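
To see that "periodic in one base, finite in another" behaviour directly, here's a small sketch (the expand() helper is written for this comment, not a library function) that does long-division digit expansion of p/q in an arbitrary base and detects when the remainder recurs:

  # Long-division digit expansion of p/q (with 0 <= p < q) in `base`,
  # returning (non-repeating digits, repeating digits).
  def expand(p, q, base):
      digits, seen = [], {}
      r = p % q
      while r and r not in seen:
          seen[r] = len(digits)   # remember where this remainder first appeared
          r *= base
          digits.append(r // q)
          r %= q
      if r:   # a remainder recurred: the expansion is periodic, never finite
          start = seen[r]
          return digits[:start], digits[start:]
      return digits, []           # remainder hit 0: finite expansion

  print(expand(1, 10, 2))    # ([0], [0, 0, 1, 1])  0.0(0011)... in base 2
  print(expand(1, 3, 3))     # ([1], [])            0.1 exactly in base 3
  print(expand(1, 2, 10))    # ([5], [])            0.5 exactly in base 10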


At least for me, that's the most interesting part of it. (0.125 isn't in the range I mentioned above though)


OK, this is getting a bit meta-recursive: I made a mistake in the explanation of my mistake, which I attempted to edit away before you replied to it. Anyway, I was talking about the range above your range, yo dawg :D...

If you take a limited-precision range of rationals in base 10, e.g. 0.10000 to 0.20000, and convert them to base 2, there are "holes" of non-representable numbers in the range. These holes are of different sizes (one of them is the range you are talking about), so I summarized it as that.
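
A rough way to measure those holes (the five-decimal-place grid is just an illustrative choice): count how many values on it convert to a double with no representation error:

  from fractions import Fraction

  # Of the 10001 five-decimal-place values from 0.10000 to 0.20000,
  # how many are exactly representable as a double?
  exact = sum(Fraction(k, 100000) == Fraction(k / 100000)
              for k in range(10000, 20001))
  print(exact, "of 10001")   # 3 of 10001 (0.125, 0.15625, 0.1875)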



