- Subject: Re: Fwd: Re: ANN: LuaJIT 1.1.0
- From: Mike Pall <mikelu-0603@...>
- Date: Wed, 15 Mar 2006 03:56:15 +0100
Hi,
therandthem wrote:
> On October 15, 2005, Mike Pall wrote:
> > But note that GCC, Java, Python and Perl suffer badly
> > from unimplemented or not working benchmarks.
> > Realistically their scores would be higher.
Well, five months later many things have changed. All of them have
improved their entries. And some of them have been upgraded to
newer versions with better performance (Java in particular). The
scoring system is now linear and not logarithmic and the default
weights are all 1.
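To make the new scoring concrete, here is a minimal sketch of an
equally-weighted linear aggregate (my own illustration, not the
shootout's actual scoring code): each benchmark contributes its
time ratio times a weight, and with all weights at 1 the score is
just the mean of the per-benchmark ratios. A logarithmic scheme
would aggregate log(ratio) instead, which compresses the outliers.

  -- Hypothetical sketch only; names and numbers are made up.
  -- ratios[i] = time(lang, benchmark i) / time(fastest, benchmark i)
  local function score(ratios, weights)
    local sum, wsum = 0, 0
    for i, r in ipairs(ratios) do
      local w = weights and weights[i] or 1  -- default weight is 1
      sum = sum + w * r                      -- linear, not log(r)
      wsum = wsum + w
    end
    return sum / wsum                        -- weighted mean ratio
  end

  print(score{1.2, 3.5, 7.0})  --> 3.9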
Cross-language comparisons always have a big uncertainty factor.
But if you are willing to accept this, check the current results:
http://shootout.alioth.debian.org/gp4sandbox/benchmark.php?test=all&lang=all
The ratio column roughly corresponds to how much slower a
language implementation is than the top performer. Here are a few
languages Lua is often compared to (results of March 15th, 2006):
  1.0x  C gcc
  1.5x  C++ g++
  1.7x  Java JDK -server
  2.1x  Java JDK -client
  2.9x  C# Mono
  4.0x  Python Psyco
------------------------  ^^^-- Compilers
                           vvv-- Interpreters
  5.7x  Lua
  7.5x  Python
  8.3x  Tcl
 12.4x  Perl
 17.4x  Ruby
 19.9x  PHP
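Since every ratio is relative to the same top performer, dividing
two of them gives a rough relative comparison. A quick bit of
arithmetic on the figures above (my own calculation, nothing more):

  -- Rough relative comparisons derived from the table above.
  local lua, python, perl = 5.7, 7.5, 12.4
  print(python / lua)  -- ~1.3x: plain Python vs. Lua
  print(perl / lua)    -- ~2.2x: Perl vs. Lua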
[Of course anyone selecting a language for a particular task
should really adjust the weights depending on the required
functionality. Factoring in line count (programmer productivity)
or memory use will push Lua higher, too.]
Ok, so Lua is the fastest interpreter (with the default weights).
But you knew this already.
Other languages would perform much worse if they couldn't rely on
a wide selection of library functions written in C. What is really
being benchmarked here is the combination of the language and its
standard library.
It's hard to judge where LuaJIT 1.1.0 would end up. The machine
the benchmarks are run on is different from mine, and I don't know
how Lua has been compiled there. Extrapolating from the Lua
scores, I guess the ratio would be between 2.0x and 2.7x.
Considering the size of the nearest contenders and the effort that
has gone into them, this isn't too bad. Aiming for 1.69x now. ;-)
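[The 2.0x-2.7x guess is just a back-of-the-envelope division: if
LuaJIT runs these workloads roughly 2x-3x faster than the plain Lua
interpreter (an assumed range, not something measured on the
shootout machine), the 5.7x ratio shrinks accordingly:

  -- Assumed speedup factors of 2.1x and 2.8x; not measured there.
  local lua_ratio = 5.7
  print(lua_ratio / 2.8)  -- ~2.0x
  print(lua_ratio / 2.1)  -- ~2.7x
]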
Bye,
Mike