I have a C function which looks like this:

	#include <stdio.h>
	#include <lua.h>

	static int testfunc(lua_State *L)
	{
		static int flag = 0;

		if(!flag) {
			/* first call: yield back to the resumer */
			flag = 1;
			printf("First call\n");
			return lua_yield(L, 0);
		} else {
			/* second call: fall through and return normally */
			printf("Second call\n");
		}

		return 0;
	}

It is called from a coroutine like this:

	co = coroutine.create(function()
		print("foo")
		testfunc()
		print("bar")
	end)

	coroutine.resume(co)
	coroutine.resume(co)

Normally, the second call to coroutine.resume() will continue executing the code at

	print("bar")

because testfunc() yields. However, I'd like resume() to resume code execution at testfunc() itself rather than at print("bar").
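
To illustrate, the output with stock Lua compared to what I'm after would roughly be:

	coroutine.resume(co)	-- prints "foo", then "First call", then yields
	coroutine.resume(co)	-- stock Lua: continues at print("bar") and prints "bar"
				-- desired:  re-enters testfunc(), prints "Second call", then "bar"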

So I've hacked lua_yield() to return -2 instead of -1. In the OP_CALL handling in lvm.c I then check for -2 and, if it's found, decrement the PC stored in "savedpc" so that lua_resume() runs OP_CALL again. The code in lvm.c looks like this:

	...
	  } else if (firstResult > L->top) {  /* yield? */
		lua_assert(L->ci->state == (CI_C | CI_YIELD));
		(L->ci - 1)->u.l.savedpc = (firstResult == L->top + 2) ? pc - 1 : pc;
		(L->ci - 1)->state = CI_SAVEDPC;
		return NULL;
	  }
	...

However, it doesn't work as expected: on resume, OP_CALL doesn't seem to be able to find "testfunc", presumably because the result of the OP_GETGLOBAL preceding the OP_CALL isn't in the VM registers any more.

When I decrement the PC by 2 instead of 1, everything works fine, because then OP_GETGLOBAL runs again and resolves the "testfunc" reference so that OP_CALL can find it. But this confuses me a little: I thought yielding/resuming would preserve the complete VM state, so it should be possible to force the VM to just re-execute the OP_CALL that was responsible for yielding, but apparently that's not the case.
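
For reference, the compiled coroutine body around the call looks roughly like this (a simplified, illustrative listing with made-up register numbers; luac -l shows the real instructions):

	GETGLOBAL  R0 "testfunc"	<-- pc - 2 lands here: testfunc is reloaded into its register
	CALL       R0			<-- pc - 1 lands here: only the call is re-run, but the
					    register apparently no longer holds testfunc after resuming
	GETGLOBAL  R0 "print"
	LOADK      R1 "bar"
	CALL       R0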

So is there any other way to force the VM to re-run the function that yielded when resuming code execution on a coroutine?

Note that I'm still on Lua 5.0.

-- 
Best regards,
 Andreas Falkenhahn                          mailto:andreas@falkenhahn.com