- Subject: Re: Should we explicitly cast from luaL_checkinteger() et al?
- From: William Ahern <william@...>
- Date: Thu, 22 Jan 2015 19:37:43 -0800
On Fri, Jan 23, 2015 at 01:05:48AM +0200, Niccolo Medici wrote:
<snip>
> Lua 5.3 deprecates luaL_checkint, luaL_checklong, luaL_optint,
> luaL_optlong, which were just convenience macros calling
> luaL_{check|opt}integer.
>
> While we can still use them (because of -DLUA_COMPAT_5_2), they aren't
> mentioned in the user manual and we're advised to use
> luaL_{check|opt}integer "with a type cast".
>
> Now, I'm not an expert in C and I was wondering:
>
> (1) Is a cast needed in simple cases like the following?
>
> int i;
> i = (int)luaL_checkinteger(L, 1);
No. As far as the C standard is concerned, it's not needed in such a case.
It behaves identically to an implicit conversion.
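That is, as far as the standard is concerned these two assignments are
equivalent:

    int i;
    i = luaL_checkinteger(L, 1);       /* implicit conversion to int */
    i = (int)luaL_checkinteger(L, 1);  /* explicit cast; same semantics */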
> (2) If a cast isn't needed here, where *is* it needed?
Assuming an architecture where sizeof (int) < sizeof (lua_Integer), the
following would print two different results:
printf("%zu\n", sizeof luaL_checkinteger(0, 0));
printf("%zu\n", sizeof ((int)luaL_checkinteger(0, 0)));
That's one reason why a macro like luaL_checkint would use a cast.
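For reference, Lua 5.2's lauxlib.h defines luaL_checkint essentially as

    #define luaL_checkint(L,n)  ((int)luaL_checkinteger(L, (n)))

--note the explicit (int) cast.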
Another reason is that some compilers might issue warnings about implicit
conversions, even if the code is 100% correct, yet remain silent when there's
an explicit cast. Normally this is a reason _not_ to cast! Casts should be
avoided because they can also cause some compilers to accept code which is
absolutely incorrect. But some consider it good etiquette for library
headers to use casts in situations similar to the above, so that developers
who have compiler diagnostics cranked all the way up don't think the
library is broken and don't have to deal with the diagnostic noise.
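For example, building with GCC or Clang and -Wconversion enabled, something
like

    lua_Integer n = luaL_checkinteger(L, 1);
    int i = n;       /* warning: conversion may change value */
    int j = (int)n;  /* cast silences the warning, not the data loss */

produces a diagnostic for the first assignment but not the second.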
However, signed conversions (whether implicit or with a cast) can still
misbehave at _runtime_, even if the code construct itself is perfectly
well-formed.
For example, assuming the same architecture as above, then
    int i;
    lua_pushinteger(L, (lua_Integer)INT_MAX + 1);
    i = luaL_checkinteger(L, -1);
leads to implementation-defined behavior at runtime. That's because the value
returned by luaL_checkinteger cannot be represented as an int, and unlike
conversions to unsigned types, C doesn't define the result: either the result
is implementation-defined or an implementation-defined signal is raised
(C99/C11 6.3.1.3p3). You have to worry about two kinds of behavior--what the
compiler does at compile time and what the program does at runtime.
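If you actually need an int, one defensive pattern is to range-check the
lua_Integer before converting (INT_MIN and INT_MAX come from <limits.h>):

    lua_Integer n = luaL_checkinteger(L, 1);
    if (n < INT_MIN || n > INT_MAX)
        return luaL_argerror(L, 1, "value out of int range");
    int i = (int)n;  /* conversion is now value-preserving */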
<snip>
> (4) Aren't we losing "documentation" by not having the words
> "int"/"long" embedded in the function name?
lua_Integer is not necessarily the same type as either int or long. For
example, on 64-bit Windows lua_Integer defaults to a 64-bit type, but long
is only 32 bits.
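You can check the sizes yourself; on 64-bit Windows (an LLP64 platform) this
should print 4, 4, and 8:

    printf("%zu %zu %zu\n", sizeof (int), sizeof (long), sizeof (lua_Integer));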
Note that while lua_tointeger and similar have runtime logic to check that a
float has an integral value representable by the lua_Integer type,
luaL_optint, luaL_checkint, etc. never had such logic. They just used
explicit casts. This is documented, but I think it gives people a false
sense of security. Arithmetic conversions can be problematic, and signed
conversions especially so. (Conversions to unsigned are guaranteed to
exhibit modulo behavior--i.e. two's complement, for someone who insists on
conflating the hardware with the abstract machine.)
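To illustrate the difference:

    lua_Integer n = (lua_Integer)INT_MAX + 1;
    unsigned int u = (unsigned int)n;  /* well-defined: reduced modulo UINT_MAX+1 */
    int i = (int)n;                    /* implementation-defined result (or signal) */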
IMHO it's best to use lua_Integer as much as possible, and be careful where
you convert from lua_Integer to another type. Because most C implementations
don't do bounds checking (they can, believe it or not, and some do), it's
important to familiarize oneself with arithmetic conversions. The C standard
is your friend, and unlike the C++ standard it's pretty easy to read and
understand.
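In practice that means comparing and computing in lua_Integer and only
narrowing at the very end, if at all. A hypothetical example, just to
illustrate staying in the wide type:

    static int l_isbyte (lua_State *L) {
      lua_Integer n = luaL_checkinteger(L, 1);  /* keep the full-width type */
      lua_pushboolean(L, 0 <= n && n <= 255);   /* compare; never narrow */
      return 1;
    }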