Glenn Edgar Admin wrote:

I have the same problem. I have to be aggressive about forcing the garbage collector during loops. When newbies run Lua for the first time on embedded systems, they have the same problem, even on heaps in the megabytes. (They run overnight tests.) Lua gets a black eye because they think there are memory leaks inside the engine, and I have to do a song and dance to smooth out the situation. I thought about changing it, but in my line of work it is important not to touch the baseline and only extend and embed.

Glenn Edgar

On 3/22/07, Ralph Hempel <rhempel@hempeldesigngroup.com> wrote:

I've got a bit of a problem that I hope some gurus can help me out with. I'm running 5.1.1 on a very memory-constrained system. It's the LEGO NXT, and I have about 52K of RAM allocated to the heap, which is controlled by malloc(). Once I've loaded the API for the NXT peripherals (a C binding to Lua), gcinfo() tells me I have about 16K left - which I find hard to believe. There are about 40 calls to C functions.

What can I tune in the source to force the gc to be more aggressive, and can it work if I give it a fixed block of memory that can never be expanded? Also, can I force the gc to run during script evaluation? Sometimes, if a script is large or complex, the malloc() I have simply bails and dies, saying it can't allocate any more memory.

Cheers, Ralph
You can tune the garbage collector as described in the reference manual: http://www.lua.org/manual/5.1/manual.html#pdf-collectgarbage

The key settings are setpause and setstepmul. I've found that in tight loops with a lot of allocation of small objects (usually strings), and where there are large tables of static reference data, you need to make the garbage collector more aggressive; some experimentation may be necessary to come up with a better tuning for your environment. Generally, I adjust the pause to between 150 and 175 instead of the default 200, but even smaller values are plausible. (Those are percentages, so 200 actually means 2.0.)
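Something like the following sketch shows the idea; the values 150 and 300 are illustrative starting points, not recommendations for any particular system:

```lua
-- Make the collector more aggressive at startup.

-- A pause of 150 means a new collection cycle starts when memory use
-- reaches 1.5x the amount in use after the previous collection
-- (the default, 200, waits for 2x).
collectgarbage("setpause", 150)

-- A step multiplier of 300 makes the incremental collector work
-- roughly 3x as fast relative to allocation (the default, 200, is 2x).
collectgarbage("setstepmul", 300)

-- "count" reports the current heap use in kilobytes, handy for
-- watching the effect of the tuning.
print(collectgarbage("count"))
```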
The consequence of basing the gc's behaviour on the amount of memory actually used is that gc tuning can be suboptimal when there is a lot of static data: by default, transient objects can pile up until they reach the size of the used heap at the end of the previous gc cycle. So if you have 128 megabytes of stuff which never changes, gc will be deferred at the end of every gc cycle until 128 megabytes of transient objects have built up. At that point it will start collecting garbage at a rate roughly twice the allocation rate (by default), but if the allocations are smallish, quite a few can build up before each gc step, and in extreme cases memory usage can even continue to increase for a while. In some ways, it might be better if there were a way of setting an expected heap floor, to compensate for such situations, so that the pause would be based on a formula like p * (usage - floor) instead of just p * usage. That might be even harder to tune, though.
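No such heap-floor setting exists in Lua; the closest workaround is to approximate it by hand in plain Lua 5.1, forcing a full collection whenever usage grows past a chosen margin above the static baseline. A sketch, where the function names and the 64 KB headroom are made up for illustration:

```lua
-- Approximation of the hypothetical "heap floor": record the heap size
-- once all static data is loaded, then force collection whenever
-- transient growth exceeds a fixed headroom.

local floor_kb = nil      -- heap size attributed to static data
local headroom_kb = 64    -- allowed transient growth (illustrative)

-- Call once, after all static reference data has been loaded.
local function set_heap_floor()
  collectgarbage("collect")
  floor_kb = collectgarbage("count")
end

-- Call periodically inside allocation-heavy loops.
local function check_heap()
  if floor_kb and collectgarbage("count") - floor_kb > headroom_kb then
    collectgarbage("collect")
  end
end
```

This trades a little CPU time in the loop for a bound on transient heap growth, which is usually the right trade on a 52K heap.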