A little while ago, Eric had a go at writing an interpreter for LanguageKit. The point was to have something usable for debugging, on OS X, and in cases where the overhead of a full compiler is too great.
There were a few bugs in the interpreter and Eric got distracted implementing CoreGraphics and doing a few other things. This weekend, I picked up the interpreter and fixed some bugs. It turned out not to require much work; Eric had already done the difficult bits. After a little while, the interpreter was passing more tests than the compiler. This was slightly embarrassing, so I started fixing bugs in the compiler too.
Now, the entire Smalltalk test suite (thanks again Günther!) passes with both the interpreter and the compiler. It's quite nice to see that the interpreter has reasonable performance here. Running the test suite with the compiler:
$ time sh runall.sh -q
....................................
36 tests run. 36 passed, 0 failed.

real	0m37.538s
user	0m9.048s
sys	0m4.674s
And with the interpreter:
$ time sh runall.sh -q
....................................
36 tests run. 36 passed, 0 failed.

real	0m28.365s
user	0m6.463s
sys	0m3.935s
As you can see, the interpreter takes less time to run the test suite, in terms of both wall-clock and CPU time. This might seem surprising - after all, the entire point of the compiler is speed - but it makes sense once you remember how small the tests are. For most of the tests, if you enable timing in edlc, you get a message like this at the end:
Smalltalk execution took 0.000000 seconds. Peak used 32592KB.
The amount of time spent running the code is so small that it's lost down in rounding errors when you convert it to seconds. You get something similar from the interpreter. The time spent compiling and optimising the code is quite a bit more than the time spent actually running it.
This means that the interpreter is a good choice for short-lived scripts. Now that it's working properly, we can start thinking about lazy compilation, where we only compile the methods that are called frequently, or even feedback-driven optimisation, where we collect profiling information in the interpreter and then use it to compile better-optimised code later.
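The lazy-compilation idea above could be sketched roughly as follows. This is a toy illustration, not LanguageKit's actual design: the `LazyMethod` class, the `HOT_THRESHOLD` knob, and the use of Python's `compile`/`exec` as a stand-in for both the interpreter and the real compiler are all assumptions made for the example. The point is just the dispatch structure: every method starts out interpreted, a call counter identifies hot methods, and once a method crosses the threshold it is compiled once and all later calls go to the compiled version.

```python
HOT_THRESHOLD = 10  # hypothetical tuning knob: calls before a method is "hot"

class LazyMethod:
    """Toy method object that is interpreted until it becomes hot,
    then compiled once and dispatched to the compiled version."""

    def __init__(self, source, name):
        self.source = source      # method body as source text
        self.name = name
        self.calls = 0
        self.compiled = None      # filled in once the method is hot

    def __call__(self, *args):
        self.calls += 1
        if self.compiled is None and self.calls >= HOT_THRESHOLD:
            # Stand-in for handing the method to the real compiler:
            # build the function once and cache it.
            ns = {}
            exec(compile(self.source, self.name, "exec"), ns)
            self.compiled = ns[self.name]
        if self.compiled is not None:
            return self.compiled(*args)
        return self.interpret(*args)

    def interpret(self, *args):
        # Toy "interpreter": does the work the slow way, rebuilding
        # the function from source on every single call.
        ns = {}
        exec(compile(self.source, self.name, "exec"), ns)
        return ns[self.name](*args)

add = LazyMethod("def add(a, b):\n    return a + b\n", "add")
for _ in range(20):
    result = add(2, 3)
print(result)                    # 5
print(add.compiled is not None)  # True: the method was compiled once hot
```

Feedback-driven optimisation extends the same structure: instead of only counting calls, the interpreted path would also record things like observed argument types, and the compiler would use that profile when it finally generates code.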