David Jones wrote:
[...]
I think you're right; this is the only safe way. I'm amazed that this
has gone unnoticed. Of course, even with this solution it's _possible_
to construct a perverse C implementation where it would be undefined
behaviour (if INT_MAX were huge enough to be outside the range of double).
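
For concreteness, the careful version amounts to checking the range
before casting; a rough sketch (the name is mine, not from any patch
under discussion), assuming a 32-bit int so that INT_MIN and INT_MAX
are both exactly representable as doubles:

#include <limits.h>

/* Store the truncated value and return 1 if d fits in an int;
   return 0 otherwise, so the cast can never overflow (which would
   be undefined behaviour). */
static int number2int_checked (double d, int *i) {
  if (d >= (double)INT_MIN && d <= (double)INT_MAX) {
    *i = (int)d;  /* in range, so truncation is well-defined */
    return 1;
  }
  return 0;  /* NaN compares false everywhere, so it lands here too */
}

On the perverse implementation above even (double)INT_MAX would
overflow, which is exactly the loophole.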
Actually, talking to someone who knows more about floating-point
representations than I do: the Evil Hack to convert a double to an int, viz.:
union luai_Cast { double l_d; long l_l; };  /* the union the macro reads through */
#define lua_number2int(i,d) \
{ volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }
...should produce the 'right' results, i.e. (double)-1 == (int)-1 ==
(unsigned int)0xFFFFFFFF, everywhere the hack works at all, because the
hack relies on a particular binary representation. It's only when you do
things correctly that it will start to fail. You're on a Mac, right,
Brian? If so, you won't be using the hack.
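
For anyone who wants to see it work, here's a stand-alone demo of the
hack (names are mine, not Lua's). It assumes IEEE 754 doubles,
round-to-nearest, and a little-endian layout in which the low 32 bits
of the mantissa overlap the integer member, as on x86:

#include <stdio.h>

union cast { double d; long l; };

static int number2int (double d) {
  volatile union cast u;
  /* 6755399441055744.0 is 2^52 + 2^51: the sum lands in [2^52, 2^53),
     where adjacent doubles are exactly 1 apart, so d's value ends up
     in the low mantissa bits, wrapped modulo 2^32, i.e. already in
     two's complement form.  (Note it rounds to nearest even rather
     than truncating.) */
  u.d = d + 6755399441055744.0;
  return (int)u.l;  /* on ILP32 l is the low word; on LP64 the cast keeps the low 32 bits */
}

int main (void) {
  int i = number2int(-1.0);
  printf("%d %u\n", i, (unsigned int)i);  /* prints: -1 4294967295 */
  return 0;
}

On a big-endian machine with 32-bit longs the union read picks up the
sign, exponent, and high mantissa bits instead, so the result is
garbage; hence it's no use on a PowerPC Mac.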