> Because PyPy is a JIT compiler its main advantages come from long run times and simple types (such as numbers).
It is not inherent to JIT compilers that they need long running times or simple types to show benefit. LuaJIT demonstrates this. Consider this simple program that runs in under a second and operates only on strings:
    vals = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o"}
    for _, v in ipairs(vals) do
      for _, w in ipairs(vals) do
        for _, x in ipairs(vals) do
          for _, y in ipairs(vals) do
            for _, z in ipairs(vals) do
              if v .. w .. x .. y .. z == "abcde" then
                print(".")
              end
            end
          end
        end
      end
    end
    $ lua -v
    Lua 5.2.1 Copyright (C) 1994-2012 Lua.org, PUC-Rio
    $ time lua ../test.lua
    .
    real 0m0.606s
    user 0m0.599s
    sys 0m0.004s
    $ luajit -v
    LuaJIT 2.0.2 -- Copyright (C) 2005-2013 Mike Pall. http://luajit.org/
    $ time ./luajit ../test.lua
    .
    real 0m0.239s
    user 0m0.231s
    sys 0m0.003s
LuaJIT is over twice the speed of the (already fast) Lua interpreter here for a program that runs in under a second.
People shouldn't take the heavyweight architectures of the JVM, PyPy, etc. as evidence that JITs are inherently heavy. It's just not true. JITs can be lightweight and fast even for short-running programs.
EDIT: it occurred to me that this might not be a great example, because LuaJIT isn't actually generating assembly here and is probably winning just because its platform-specific interpreter is faster. However, it is still instrumenting the code's execution and paying the costs of looking for traces to compile. So even with those JIT-compiler overheads it beats the plain interpreter, which is only interpreting.
PyPy also manages to speed this program up (or at least, what I understand this program to be):
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time python t.py
    .
    real 0m0.202s
    user 0m0.194s
    sys 0m0.007s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time python t.py
    .
    real 0m0.192s
    user 0m0.184s
    sys 0m0.008s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time python t.py
    .
    real 0m0.198s
    user 0m0.190s
    sys 0m0.007s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time pypy t.py
    .
    real 0m0.083s
    user 0m0.068s
    sys 0m0.013s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time pypy t.py
    .
    real 0m0.083s
    user 0m0.068s
    sys 0m0.013s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ time pypy t.py
    .
    real 0m0.082s
    user 0m0.067s
    sys 0m0.013s
    Alexanders-MacBook-Pro:tmp alex_gaynor$ cat t.py
    def main():
        vals = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o"}
        for v in vals:
            for w in vals:
                for x in vals:
                    for y in vals:
                        for z in vals:
                            if v + w + x + y + z == "abcde":
                                print(".")
    main()
Just tried it. Using a list instead of a set literal sped up the program slightly (~350 ms -> ~330 ms for python and ~155 ms -> ~140 ms for pypy on my computer).
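To sanity-check the list-vs-set difference, here's a minimal timeit sketch; the function and variable names are mine, not from the thread, and the exact timings will vary by machine and interpreter:

```python
import timeit

letters = ["a", "b", "c", "d", "e", "f", "g", "h",
           "i", "j", "k", "l", "m", "n", "o"]

def search(vals):
    # Count five-way concatenations equal to "abcde" (15^5 iterations).
    hits = 0
    for v in vals:
        for w in vals:
            for x in vals:
                for y in vals:
                    for z in vals:
                        if v + w + x + y + z == "abcde":
                            hits += 1
    return hits

# Both containers hold the same elements, so both find exactly one match.
assert search(letters) == search(set(letters)) == 1

# Time a single run of each variant; the thread reports list iteration
# being slightly faster than set iteration for this loop.
print("list:", timeit.timeit(lambda: search(letters), number=1))
print("set: ", timeit.timeit(lambda: search(set(letters)), number=1))
```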
I recommend using a set of toy programs like those from the Benchmarks Game. A few nested loops with a little string concatenation won't really tell you anything useful.
    ~$ time python t.py
    .
    real 0m0.350s
    user 0m0.304s
    sys 0m0.024s
    ~$ time lua t.lua
    .
    real 0m0.255s
    user 0m0.248s
    sys 0m0.004s
    ~$ time luajit t.lua
    .
    real 0m0.157s
    user 0m0.144s
    sys 0m0.012s
I unfortunately had to edit the program after I took the measurements because HN cut it off horizontally, so I removed one or two letters from the list.
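Single `time` runs of loops this small are dominated by noise and startup cost. One way to get steadier numbers is timeit's repeat-and-take-the-minimum pattern; this is a generic sketch of that pattern, not code from the thread:

```python
import timeit

def concat_loop():
    # Same shape as the benchmark above: nested loops with concatenation.
    vals = ["a", "b", "c", "d", "e", "f", "g", "h",
            "i", "j", "k", "l", "m", "n", "o"]
    hits = 0
    for v in vals:
        for w in vals:
            for x in vals:
                for y in vals:
                    for z in vals:
                        if v + w + x + y + z == "abcde":
                            hits += 1
    return hits

# Run the whole loop five times and report the minimum, which is the
# measurement least affected by scheduling noise and cache effects.
times = timeit.repeat(concat_loop, number=1, repeat=5)
print("best of 5: %.3fs" % min(times))
```

This still doesn't fix the deeper problem (one toy loop isn't representative), but it at least makes the numbers comparable across runs.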
Another example is all modern JS engines: they are JIT compilers heavily tuned for the short-runtime use case (often to the detriment of the long-runtime one).