A small follow-up, just to be on the safe side: I'm not suggesting in
any way that a chained allocator would be a better choice for Lua;
that is clearly not the case. As I mentioned before, I implemented
the chained allocator purely as an exercise that gave me easy access
to memory statistics, so I could get a picture of the memory
fragmentation caused by Lua. Such an allocator cannot help much with
fragmentation because of its architecture; a segregated allocator is
a much better choice.
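In case it helps anyone, the hook itself is tiny: Lua lets you supply
a custom allocator through lua_newstate() and the lua_Alloc callback,
so wrapping realloc()/free() with a few counters is all it takes to
collect basic statistics. The sketch below is only a minimal
illustration, not my actual chained allocator, and the stats structure
and field names are made up for the example:

    #include <stdio.h>
    #include <stdlib.h>

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Counters collected by the allocator hook (names are just for the example). */
    typedef struct {
        size_t current;   /* bytes currently allocated */
        size_t peak;      /* high-water mark           */
        size_t blocks;    /* live allocations          */
    } alloc_stats;

    /* lua_Alloc callback: behaves like free when nsize == 0 and like
       realloc otherwise, updating the counters from osize/nsize. */
    static void *stats_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        alloc_stats *st = (alloc_stats *)ud;
        size_t old = (ptr != NULL) ? osize : 0;  /* for brand-new blocks osize
                                                    may encode the object type,
                                                    so treat the old size as 0 */
        if (nsize == 0) {                        /* free request */
            free(ptr);
            st->current -= old;
            if (ptr != NULL)
                st->blocks--;
            return NULL;
        }

        void *nptr = realloc(ptr, nsize);
        if (nptr == NULL)
            return NULL;                         /* let Lua raise the memory error */

        if (ptr == NULL)
            st->blocks++;                        /* a new block was created */
        st->current = st->current - old + nsize;
        if (st->current > st->peak)
            st->peak = st->current;
        return nptr;
    }

    int main(void)
    {
        alloc_stats st = {0, 0, 0};
        lua_State *L = lua_newstate(stats_alloc, &st);
        if (L == NULL)
            return 1;
        luaL_openlibs(L);
        luaL_dostring(L, "local t = {} for i = 1, 1000 do t[i] = i * i end");
        printf("current: %lu  peak: %lu  blocks: %lu\n",
               (unsigned long)st.current, (unsigned long)st.peak,
               (unsigned long)st.blocks);
        lua_close(L);
        return 0;
    }

The one detail worth remembering is that Lua passes the old block size
in osize on every reallocation and free, so the hook never has to
record block sizes on its own.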
In my tests I used a 'combined' allocator (fixed 32-byte blocks plus
the chained one) and ran a few scripts (bisect.lua, factorial.lua,
speed-test.lua) in a simulated environment constrained to a maximum of
64 KB of RAM. With the combined allocator I managed to run
factorial.lua only after a lot of tweaking, and speed-test.lua didn't
run at all, since it needed at least 66 KB of RAM. With an
out-of-the-box TLSF I was able to run speed-test.lua, and
factorial.lua took much less memory (bisect.lua wasn't a problem in
either case). This goes to show that even though TLSF needs some
memory for its internal data structures, in the end it is still much
better.
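For comparison, the TLSF glue is even smaller: the whole pool can live
in a single static buffer and the lua_Alloc callback just forwards
every request to the pool's malloc/realloc/free. The sketch below
assumes an implementation that exposes a tlsf_create_with_pool /
tlsf_malloc / tlsf_realloc / tlsf_free style of API (as in Matthew
Conte's tlsf.h); other TLSF distributions name these functions
differently, so adjust as needed. The 64 KB pool size simply mirrors
the constraint from my tests:

    #include <stdio.h>

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    #include "tlsf.h"                    /* tlsf_create_with_pool-style API assumed */

    #define POOL_SIZE (64 * 1024)        /* the 64 KB budget from the tests */

    /* Static pool; the union keeps it suitably aligned for TLSF's headers. */
    static union { double align; char bytes[POOL_SIZE]; } pool;

    /* lua_Alloc callback that serves every Lua allocation from the TLSF pool.
       TLSF tracks block sizes itself, so osize is not needed here. */
    static void *tlsf_lua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        tlsf_t tlsf = (tlsf_t)ud;
        (void)osize;

        if (nsize == 0) {
            tlsf_free(tlsf, ptr);
            return NULL;
        }
        if (ptr == NULL)
            return tlsf_malloc(tlsf, nsize);
        return tlsf_realloc(tlsf, ptr, nsize);
    }

    int main(void)
    {
        tlsf_t tlsf = tlsf_create_with_pool(pool.bytes, sizeof pool.bytes);
        lua_State *L = lua_newstate(tlsf_lua_alloc, tlsf);
        if (L == NULL)
            return 1;                    /* not even the state fit in the pool */

        luaL_openlibs(L);
        if (luaL_dofile(L, "speed-test.lua") != 0)
            printf("script failed: %s\n", lua_tostring(L, -1));

        lua_close(L);
        tlsf_destroy(tlsf);
        return 0;
    }

Since TLSF keeps its own bookkeeping inside the pool, the hook can
ignore osize completely; that bookkeeping is exactly the
internal-data-structure overhead mentioned above, and it still comes
out ahead.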