JIT coverage is still quite variable. For some benchmarks, e.g. richards it is close to 100%.
For others, much lower:
https://github.com/savannahostrowski/pyperf_bench/blob/main/profiling/jit.svg
Note: a fair bit of the time attributed to the interpreter is frame cleanup, which makes the interpreter fractions look larger than they really are.
We can increase JIT coverage by:
- Treating dynamic exits like side exits: warming them up and compiling a new side trace.
- Adding possible JIT entry points at function entries, as well as at backward edges.
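The first bullet can be sketched as a counter-based warm-up scheme. This is not CPython's actual implementation (the real machinery lives in the C-level optimizer), just an illustration of the idea: an exit stays cold and falls back to the interpreter until it has been taken often enough, at which point a side trace is compiled from its target so later hits stay in JIT code. The class, function, and threshold names here are all hypothetical.

```python
# Hypothetical sketch of side-exit warm-up; not CPython's real code.
EXIT_WARMUP_THRESHOLD = 16  # assumed threshold, not CPython's actual value


class Exit:
    """A (side or dynamic) exit from a compiled trace."""

    def __init__(self, target):
        self.target = target    # bytecode location the exit jumps to
        self.hotness = 0        # number of times this exit has been taken
        self.side_trace = None  # compiled continuation, once hot


def take_exit(exit_, compile_trace):
    """Called when execution leaves a trace through `exit_`.

    Cold exits return None, meaning: resume in the interpreter.
    Once an exit has been taken EXIT_WARMUP_THRESHOLD times, a side
    trace is compiled starting at its target, and subsequent hits
    jump straight into that trace instead of deoptimizing.
    """
    if exit_.side_trace is not None:
        return exit_.side_trace  # already compiled: stay in JIT code
    exit_.hotness += 1
    if exit_.hotness >= EXIT_WARMUP_THRESHOLD:
        exit_.side_trace = compile_trace(exit_.target)
        return exit_.side_trace
    return None  # still cold: fall back to the interpreter
```

Treating dynamic exits the same way just means routing them through the same warm-up path instead of always deoptimizing to the interpreter.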
Linked PRs