> field shrinking based on dynamic range analysis

Could you expand a bit on this? I can't find any resource about it, and the terms are overloaded.



Range analysis is a compiler term for exploiting the known range of values that are placed in a field. In a JIT language¹ this can be done even when you can't absolutely prove the range, by trapping and deoptimizing the structure if the compiler's range assumption turns out to have failed, similar to how JS JITs specialize and fall back to deopt.

As a concrete example, a tree node reference could be a 16-bit index into a backing table of real tree nodes, similar to how indexed vertex data is represented in graphics code.

¹ You could do it in an AOT language too, but it's easier to think about our common JIT languages as the example case.
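To make the narrowed-index idea concrete, here is a minimal Rust sketch (Node, Tree, and the NONE sentinel are hypothetical names; the assert stands in for the point where a JIT would deoptimize to wider fields):

    // Field shrinking via narrow indices: node references are stored
    // as u16 indices into a backing table instead of 64-bit pointers.
    // NONE is a hypothetical sentinel meaning "no child".
    const NONE: u16 = u16::MAX;

    struct Node {
        value: i32,
        left: u16,  // index into Tree::nodes, NONE if absent
        right: u16,
    }

    struct Tree {
        nodes: Vec<Node>, // backing table, like an indexed vertex buffer
    }

    impl Tree {
        fn new() -> Self {
            Tree { nodes: Vec::new() }
        }

        // Returns the new node's index. We speculate that the table
        // stays under 65,535 entries; here a failed assumption just
        // panics, where a JIT would instead deoptimize to a wider
        // (e.g. u32) field layout.
        fn push(&mut self, value: i32) -> u16 {
            let idx = self.nodes.len();
            assert!(idx < NONE as usize, "range assumption failed: widen indices");
            self.nodes.push(Node { value, left: NONE, right: NONE });
            idx as u16
        }
    }

    fn main() {
        let mut tree = Tree::new();
        let root = tree.push(10);
        let child = tree.push(20);
        tree.nodes[root as usize].left = child;
        let left = tree.nodes[root as usize].left;
        println!("left child value: {}", tree.nodes[left as usize].value);
    }

In a real JIT the assert would instead trigger rewriting the structure with wider index fields and invalidating the specialized code that assumed the narrow layout.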


Many thanks.

In your example, do you mean the indices are shrunk if you know the size of the table? I knew about specialization and deopt, but never suspected the principle could be applied to vertex indices on the GPU.



