
Thank you for writing the obvious. Instruction byte count is 100% the wrong metric here. Instruction count (given reasonable decoding/timing constraints) is the thing to optimize for, and variable-length encoding is indeed very bad.


Instruction byte count matters quite a lot when you're buying ROM in volume. And today, the main commercial battleground for RISC-V is the microcontroller space, where people care about these things.


For those of us without the expertise, could you elaborate on why that is?

On the one hand we have byte count, with its obvious effect on cache space used. But to those of us who don't know, why is instruction count so important?

There's macro-op fusion, which admittedly burns transistors that could be used for other things. Could you elaborate on why it's not sufficient?
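
(For concreteness, a minimal sketch of my own, not from the thread, of the kind of pattern fusion targets: an indexed array load is one instruction on x86-64 but typically three on RV64, and a fusing decoder, or the Zba sh2add instruction, can shrink the sequence.)

    /* Illustration: base[i], as compilers typically emit it.
     * x86-64:  mov  eax, [rdi + rsi*4]   -- one instruction
     * RV64:    slli a1, a1, 2            -- scale the index
     *          add  a0, a0, a1           -- form the address
     *          lw   a0, 0(a0)            -- load
     * A fusing front end (or Zba's sh2add) can collapse the
     * shift+add pair into a single internal op. */
    #include <stdint.h>

    int32_t load_indexed(const int32_t *base, long i) {
        return base[i];
    }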

And then there's the fact that modern x86 does the opposite of macro-op fusion, actually splitting CISC instructions into micro-ops. Why is it so bad if there were more micro-ops to start with, given that Intel chooses to do this?
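
(Again my own hedged sketch, not from the thread: a memory read-modify-write is one x86 instruction that the core cracks into roughly load/add/store micro-ops, whereas RISC-V spells out the same three steps as three architectural instructions.)

    /* Illustration: *p += x.
     * x86-64:  add dword ptr [rdi], esi  -- one instruction, cracked by
     *                                       the core into roughly three
     *                                       micro-ops (load, add, store)
     * RV64:    lw   a2, 0(a0)            -- the same three steps, but
     *          addw a2, a2, a1           -- exposed as architectural
     *          sw   a2, 0(a0)            -- instructions
     */
    void rmw_add(int *p, int x) {
        *p += x;
    }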



