Interpreter Taming to Realize Multiple Compilations in a Meta-Tracing JIT Compiler Framework
The RPython framework is a toolchain that generates a VM equipped with a trace-based JIT compiler from a bytecode interpreter. Although RPython makes it easier to create a high-performance VM, there is a dilemma: the generated VMs are hard to extend because they must use the fixed VM components provided by RPython. In this context, realizing a lightweight compilation, or a new heavyweight compilation with a different compilation scope, requires substantial engineering effort to extend the meta-tracing JIT compiler.
We propose Multilevel RPython, which performs two-level compilation with different compilation scopes. The compilation levels of Multilevel RPython consist of a lightweight level, which emits method-based threaded code, and a heavyweight level, which emits trace-based optimized code. Multilevel RPython is realized not by creating different compilers from scratch but by taming a bytecode interpreter given to the RPython toolchain. In other words, the lightweight compilation is performed by an interpreter tamed for threaded code generation, while the heavyweight compilation reuses RPython's existing meta-tracing JIT compilation.
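To illustrate the lightweight level, the following is a minimal sketch in plain Python (all names are hypothetical, not the paper's actual code) of how a tamed interpreter can emit method-based threaded code: instead of dispatching on opcodes at run time, the interpreter walks the bytecode once and emits a sequence of closures whose direct invocation replaces opcode dispatch.

```python
# Hypothetical sketch: the same bytecode handlers are reused both to
# interpret a program and to emit threaded code for the lightweight level.

def make_handlers(consts):
    # Each handler returns a closure over its operands; the "compiled"
    # form of a method is then just a list of these closures.
    def push(i):
        def op(stack):
            stack.append(consts[i])
        return op

    def add():
        def op(stack):
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        return op

    return {"PUSH": push, "ADD": add}

def compile_threaded(bytecode, consts):
    # Lightweight compilation: one pass over the bytecode emits the
    # closure sequence; executing it later skips opcode dispatch entirely.
    handlers = make_handlers(consts)
    code, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == "PUSH":
            code.append(handlers["PUSH"](bytecode[pc + 1]))
            pc += 2
        else:
            code.append(handlers[op]())
            pc += 1
    return code

def run_threaded(code):
    stack = []
    for op in code:
        op(stack)
    return stack[-1]

# (PUSH 0) (PUSH 1) ADD with the constant pool [2, 3] evaluates to 5.
code = compile_threaded(["PUSH", 0, "PUSH", 1, "ADD"], [2, 3])
print(run_threaded(code))  # 5
```

The key point is that the threaded-code emitter shares its handler definitions with the ordinary interpreter, which is the essence of taming rather than writing a second compiler.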
In this talk, we present the implementation status of Multilevel RPython. In particular, we have implemented an inline caching technique for threaded code generation and a prototype of the compilation-level shifting mechanism. Both techniques are realized by taming the definition of an interpreter together with a slight modification to the meta-tracing compiler.
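As a hedged sketch of the inline caching idea (hypothetical names, plain Python rather than the paper's RPython code), a call-site closure emitted into threaded code can cache the receiver's class and the looked-up method from the previous call, revalidating only the class on each subsequent call:

```python
# Hypothetical sketch: monomorphic inline caching inside an emitted
# threaded-code closure. Each call site carries its own one-entry cache.

def emit_call_method(name):
    cache = {"cls": None, "method": None}  # per-call-site cache slot

    def op(stack):
        receiver = stack.pop()
        cls = type(receiver)
        if cache["cls"] is not cls:
            # Cache miss: perform the full method lookup and remember it.
            cache["cls"] = cls
            cache["method"] = getattr(cls, name)
        # Cache hit (the common case): call the cached method directly.
        stack.append(cache["method"](receiver))
    return op

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def norm1(self):
        return abs(self.x) + abs(self.y)

call = emit_call_method("norm1")
stack = [Point(3, -4)]
call(stack)
print(stack.pop())  # 7
```

When the receiver class at a site is stable, as is typical, repeated executions of the threaded code avoid the method lookup entirely, which is consistent with the speedup reported below.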
The microbenchmark evaluation showed that inline caching makes threaded code generation approximately 20% faster than threaded code generation without it. In addition, we conducted a multilevel JIT experiment on an application combining large benchmark programs to simulate a real-world workload. This experiment showed that multilevel JIT compilation on Multilevel RPython is about 14% faster.