
Why does the partial evaluation approach result in larger performance variance than meta-tracing in the reported results?


Yes, that's indeed a non-obvious issue, and it looks rather strange on the graph.

That's not 'partial evaluation' per se. Instead, it is the difference between RPython and HotSpot that surfaces here. RPython currently generates single-threaded VMs: neither the GC nor compilation runs in a separate thread. The HotSpot JVM, however, can do both in parallel and additionally has other infrastructure threads running. In the end, this increases the likelihood that the OS reschedules the application thread, which becomes visible as jitter.
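To illustrate how that kind of rescheduling surfaces in benchmark results, here is a minimal, hypothetical sketch (plain Python, not from the paper): it times repeated runs of a workload and reports the coefficient of variation, which is roughly the "jitter" visible in the graphs. Any OS preemption of the benchmark thread, e.g. in favor of GC or JIT compiler threads, widens that spread.

```python
import statistics
import time

def bench(fn, runs=10):
    """Time `fn` over several runs and report mean and relative spread."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    mean = statistics.mean(times)
    # Coefficient of variation: run-to-run spread relative to the mean.
    # Preemption by other VM threads inflates this value.
    cv = statistics.stdev(times) / mean
    return mean, cv

mean, cv = bench(lambda: sum(i * i for i in range(200_000)))
print(f"mean={mean:.4f}s  cv={cv:.2%}")
```

On a single-threaded VM like an RPython-built interpreter, GC and compilation pauses land inside the measured run itself rather than competing for CPU from other threads, so the run-to-run spread tends to be smaller.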


Oh, makes sense :) Is that mentioned in the paper and I missed it? If not, it might deserve a footnote, as the difference is glaring.

BTW, would Graal really become available as a stock-HotSpot plugin in Java 9 thanks to JEP 243? I see things are ready on Graal's end[1], but are they on HotSpot's?

[1]: https://bugs.openjdk.java.net/browse/GRAAL-49



