On 14/01/16 14:11, Roberto Ierusalimschy wrote:
> Inf feels worse than just losing the low-order bits of the value. It
> would have to be an extremely large value that's somehow posing as an
> int (at least 308 digits long) to not even be able to fit in a double.
> Inf means the value is larger than what we can represent correctly.
>
> -- Roberto
Yeah, that seems right from the basic user's point of view. A basic
user only knows "number"; double or int is something that happens
inside the VM and mostly does the "right thing". Getting Inf in some
cases and wraparound in others leaks a detail the user might not want
to know about. For most people 2 and 2.0 are the same number.
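To make that concrete, here is a rough sketch of the behaviour being
discussed, assuming current Lua 5.3 semantics (the exact printed values
depend on the platform's float format):

    -- To the casual user, 2 and 2.0 are interchangeable:
    print(2 == 2.0)                       --> true
    print(math.type(2), math.type(2.0))   --> integer   float

    -- ...until arithmetic near the limits exposes the subtype:
    print(math.maxinteger + 1)     --> wraps around to math.mininteger
    print(math.maxinteger + 1.0)   --> roughly 9.2233720368548e+18, no wrap
    print(1e308 * 10)              --> inf (float overflow saturates)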
Another approach is to think about how you explain away to an end user
some of the usual float weirdness (like the 0.1+0.1 example mentioned
earlier): you say something like "you know, computers work like this",
then mention something about periodic binary fractions and finite
memory, and the user is happy because he learned something about
computers. Now, if he asks "why does this sometimes give infinity, and
sometimes some random negative number?", and you explain it, he will
say "you mean I have to always remember to type .0 after each number?"
and roll his eyes.