- what is their overhead, when used and when not used?
Compilation takes longer. Bytecode compiled from Lua-compatible source is bit-for-bit identical to what the Lua 5.1 compiler generates (modulo some probable endianness issues), so there's no execution overhead. If you plug in language extensions, well, it depends on how those extensions are written. So, short answer: if you precompile, there is no overhead. If you recompile on the host before each run, there is a significant overhead in time and memory. The gap should narrow as Metalua matures, but it will never match the efficiency of Lua's well-designed C compiler.
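For instance, once a file has been compiled down to Lua 5.1 bytecode, stock Lua loads it like any other chunk and can't tell where it came from (the file name below is hypothetical):

    -- Load a chunk precompiled to Lua 5.1 bytecode, whether it came
    -- from luac or from the Metalua compiler; no compilation happens
    -- at run time.
    local chunk = assert(loadfile("mymodule.luac"))  -- hypothetical name
    chunk()  -- runs at full VM speed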
- what are they designed for as a primary purpose?
Lua is roughly Scheme with tables instead of lists, a nice syntax, and no macros. Metalua aims at removing the no-macro restriction.
- what can be done with them?
What you want! More precisely, you can execute arbitrary code at compilation time, typically to generate pieces of source and splice them into the user-written sources. You can manipulate sources globally, through +{ ... } quasi-quotes, or directly as ASTs (i.e. syntax trees, essentially the same as Lisp's sexps). A library to ease navigation and global manipulation of syntax trees is under development.
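As a minimal sketch of the two quoting operators (based on the Metalua manual; exact AST constructor shapes may vary between versions): -{ ... } escapes to the meta-level at compile time, and +{ ... } quotes source into an AST:

    -- -{ ... } runs at compile time; here the addition is performed
    -- by the compiler, and only the literal result is compiled in:
    x = -{ `Number{ 2 + 3 } }      -- compiles to exactly: x = 5

    -- +{ ... } builds an AST from concrete syntax; -{ ... } then
    -- splices that tree back into the program being compiled:
    y = -{ +{ 2 + 3 } }            -- compiles to: y = 2 + 3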
- what cannot be done with them?
Well, you can do whatever you want with a Turing-complete language. However, there are some things that would require significant development to do with Metalua:
- Target a different architecture than the Lua VM. This includes extending the VM in any way.
- Create a language syntax that doesn't comply with Lua's syntactic style (i.e. mostly keyword-driven, simple, clear, non-ambiguous). You would probably have to implement backtracking in the parser combinators, which is not very difficult per se, but would be hard to make efficient, tends to give unhelpful error diagnostics, and encourages people to create Perl-esque unmaintainable syntaxes. Actually, I consider this restriction a feature, not a bug.
- Plug in a substantially different lexer (but that will be fixed soon). That would be reader macros, in Lisp's terminology.
A good reason to prefer a preprocessor over Metalua is when you only want to implement simple, local syntax sugar: then it might not be worth learning to deal with meta-levels, the grammar generator, and the Metalua parser.
Another good reason is that token filters seem rather mature; Metalua is still alpha, and will remain so for months at least.
- what can be done more efficiently (space and speed) with one or the other?
In both cases, overhead happens only during compilation. I'd guess that token filters are faster and easier to learn, but it's hard to build extensions with them that aren't strictly local (unless you add token_stream -> AST and AST -> token_stream converters, but that would essentially be reimplementing Metalua in a clunky way). However, I haven't run any benchmarks, and don't plan to in the short term.
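To make the locality point concrete, here is a schematic pass over a flat token list. This is not the actual token-filter patch API; the token shape and the "unless" keyword are made up for illustration:

    -- Schematic only: real token filters have their own interface.
    -- A filter sees tokens one after another, with no tree context,
    -- so purely local rewrites like this one are easy:
    local function filter(tokens)
       local out = {}
       for _, tok in ipairs(tokens) do
          -- rewrite the made-up keyword "unless" as "if not"
          if tok.type == "Name" and tok.value == "unless" then
             out[#out+1] = { type = "Keyword", value = "if" }
             out[#out+1] = { type = "Keyword", value = "not" }
          else
             out[#out+1] = tok
          end
       end
       return out
    end

Anything that needs to know where an expression starts and ends, however, effectively requires rebuilding a parser on top of the stream.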
- what are the risks of multiplying Lua idioms?
Whatever can't be compiled by the vanilla Lua 5.1 compiler must not be called Lua source. Proper taxonomy is important.
The lack of semantic stability of Lisp programs is Lisp's greatest strength, but it's also probably one of the main reasons why it hasn't been adopted as widely as expected. I tried to limit this issue in Metalua by encouraging a clear separation of meta-levels and respect for some syntactic homogeneity, but the problem won't disappear. At least you have to explicitly load your extensions from within the source file, so when looking at a source file you know which language variant it's written in.
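For instance, a file using Metalua's pattern-matching extension announces it at the top; this is a sketch based on the Metalua manual, and the exact syntax may differ between versions:

    -{ extension "match" }   -- compile-time declaration of the dialect

    -- The "match" construct below only exists because of that line:
    match point with
    | { x, y } -> print(x, y)
    | _        -> print "not a point"
    end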
Whenever you want many developers to collaborate efficiently, you need rigorous (and smart) coding practices, so that the code is universally intelligible within the dev team. Macros give developers much more power: that's great, but it requires stronger policies for teamwork. And by "the team", I mean not only the initial developers, but also the maintainers, the forkers of an open source project, etc.