
From what I've heard, modern Intel chips basically only keep the x86 instruction set as a facade, and the architecture beneath is different (much larger number of registers, etc.). Wouldn't it potentially be a good idea to do a clean redesign of the "frontend" and eliminate all the legacy support?


Hell, to take this to the extreme, look up the PowerPC 615: a mid-90s IBM processor that sadly never made it to market. It ran x86, PPC32, and PPC64 code natively, was socket-compatible with the then-current Intel Pentium, and had comparable performance. It was cancelled because IA-64 was expected to enter the market soon, and those at IBM presumed IA-32 (i.e., x86) was about to be killed off.


Yes, modern Intel CPUs use a RISC-like architecture underneath. The CPU contains a decoder unit which converts x86 instructions to RISC-like micro-ops. Getting rid of x86 support would not be a good idea. It's their "legacy support" which has allowed them to dominate the desktop and server market. Porting your software to a new architecture can be a real pain.
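To illustrate the idea (this is a toy sketch, not Intel's actual micro-op encoding, and the `uop.*` mnemonics are made up): a single CISC instruction with a memory operand, like `add [rbx], rax`, can be "cracked" into simpler RISC-like micro-ops that each do one thing.

```python
def decode(insn: str) -> list[str]:
    """Crack a (heavily simplified) x86 instruction into toy micro-ops."""
    op, _, operands = insn.partition(" ")
    dst, src = [o.strip() for o in operands.split(",")]
    if op == "add" and dst.startswith("["):
        mem = dst.strip("[]")
        return [
            f"uop.load  tmp0, [{mem}]",   # read the memory operand
            f"uop.add   tmp0, {src}",     # do the arithmetic
            f"uop.store [{mem}], tmp0",   # write the result back
        ]
    return [f"uop.{op}   {dst}, {src}"]   # register-register: one micro-op

print(decode("add [rbx], rax"))
```

One x86 instruction, three micro-ops; a register-register `add` decodes to just one. The real decoders are of course vastly more complex, but this is the basic shape of the translation.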


I wonder if it would be reasonable to come up with a more modern ISA (or just borrow somebody else's, like AArch64?) and offer that as an alternate front end.

Keep the x86 decoder front end that they have now. Add another one for the better ISA. Add another mode that kicks the CPU into that ISA. Current x86-64 OSes already generally support two architectures: x86-64 and i386. This would just be a third one. Then everything could move to the new ISA incrementally, and legacy software could keep on working forever using the legacy decoder.
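The proposal above can be sketched roughly like this (all names, including the `switch_isa` mode-switch instruction, are hypothetical): one shared back end fed by two front-end decoders, with a mode bit selecting which ISA the fetch unit decodes, analogous to how long mode vs. compatibility mode already selects x86-64 vs. i386 decoding today.

```python
def decode_x86(raw: str) -> str:
    return f"uops(x86: {raw})"       # legacy decoder (placeholder)

def decode_new_isa(raw: str) -> str:
    return f"uops(new: {raw})"       # hypothetical new-ISA decoder

DECODERS = {"x86": decode_x86, "new": decode_new_isa}

def run(program: list[str]) -> list[str]:
    mode = "x86"                     # boot in legacy mode
    out = []
    for insn in program:
        if insn.startswith("switch_isa "):
            mode = insn.split()[1]   # hypothetical mode-switch instruction
            continue
        out.append(DECODERS[mode](insn))
    return out

print(run(["mov rax, 1", "switch_isa new", "add x0, x1"]))
```

Both decoders feed the same micro-op back end, so the execution hardware is shared; only the front end differs per mode.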

I'd guess that the x86 ISA is no longer enough of a bottleneck to justify it. Throw enough transistors at the problem and perhaps it doesn't matter anymore whether your ISA makes any sense.


Couldn't the decode unit be translated into software that converts legacy software on the fly? It seems to me that the outdated instruction set is detrimental to innovation and power efficiency. Also, how high is the overhead in terms of chip area and power that the decoder incurs, compared with one that has to decode a simpler instruction set? My limited understanding is that the number of instructions issued per cycle depends heavily on the instruction set and decode speed.


> Couldn't the decode unit be translated into software that converts legacy software on the fly?

Not only is it possible, but it has been done. Look up the Transmeta Efficeon. Note that it's not RISC-like but VLIW (which I like to think of as "the next thing after RISC").


Transmeta tried to do exactly what you are proposing, i.e., have a software layer (Code Morphing) translate the x86 instruction stream to their native VLIW instruction set. They didn't go very far.
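The Code Morphing approach is, at its core, software dynamic binary translation with a translation cache. A minimal sketch of that idea (the details here are hypothetical, not Transmeta's actual design): translate each guest basic block to host code on first execution, cache the result, and reuse it on later passes, so the translation cost is amortized over hot code.

```python
# Maps guest program counter -> translated host "code" for that block.
translation_cache: dict[int, list[str]] = {}

def translate_block(pc: int, guest_code: dict[int, list[str]]) -> list[str]:
    """Translate one guest basic block to host 'instructions' (toy form)."""
    return [f"host:{insn}" for insn in guest_code[pc]]

def execute(pc: int, guest_code: dict[int, list[str]]) -> list[str]:
    """Run the block at pc: translate on a cache miss, reuse on a hit."""
    if pc not in translation_cache:                      # slow path: first run
        translation_cache[pc] = translate_block(pc, guest_code)
    return translation_cache[pc]                         # fast path: cached

print(execute(0x400000, {0x400000: ["mov eax, 1", "ret"]}))
```

The first execution of a block pays the translation cost; every later execution hits the cache. This is why such systems tend to do well on long-running loops and badly on code that is executed only once.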


In the case of Intel, the underlying architecture is really fast, whereas my understanding is that Transmeta failed because the VLIW architecture did not really work out.

If Intel CPUs use a RISC-like architecture, nothing would prevent static translation to it, which was not feasible for Transmeta CPUs. The x86 software layer would then only be there to preserve backwards compatibility.


You just suggested that decode is a bottleneck, and that decode should be done in software, in the same paragraph.


Not really. I suggested decoding once at the start of the application, or even at compile time for that matter, in order to retain legacy support. A CPU has to decode the instruction stream no matter what; if the decoding is complicated by the fact that there is no close correspondence between the instructions and the hardware, that is a potential bottleneck.


That's basically what Itanium was, and it failed against the AMD competitor (x86-64) that kept legacy support.



