Thank you for sharing your story, and I'm so glad it worked out in the end.
This story, however, is also why algorithmic interviews and the supposedly "irrelevant to the real job" programming interviews are not going anywhere soon.
Having done a lot of hiring, I've been surprised by how many candidates do not actually know how to code despite their experience and looking good on paper.
The article includes a link to https://github.com/virtualagc/virtualagc - Virtual AGC - which is interesting.
Although the GitHub repo itself is not news (see its commit dates), it's still good news that this history is preserved.
The only thing missing is a SCSI hard drive; it's a shame for the Adaptec controller to have only the CD-ROM connected to it. Perhaps an external SCSI RAID array?
"The name Radiation Laboratory, or "Rad Lab," was chosen to be intentionally deceptive, creating the perception to those on the outside that the laboratory was working on nuclear physics, a discipline that was seen as too immature to have an impact on the war effort. During the fall of 1940, the Rad Lab sprang to life on the MIT campus, and by December, a primitive two-parabola system had already been emplaced and was undergoing initial testing on the rooftop of Building 6 at MIT.
During the next five years, the Radiation Laboratory made stunning contributions to the development of microwave radar technology in support of the war effort. Inventions included airborne bombing radars, shipboard search radars, harbor and coastal defense radars, gun-laying radars, ground-controlled approach radars for aircraft blind landing, interrogate-friend-or-foe beacon systems, and the long-range navigation (LORAN) system. Some of the most critical contributions of the Radiation Laboratory were the microwave early-warning (MEW) radars, which effectively nullified the V-1 threat to London, and air-to-surface vessel (ASV) radars, which turned the tide on the U-boat threat to Allied shipping. In November 1942, U-boats claimed 117 Allied ships. Less than a year later, in the two-month period of September to October 1943, only 9 Allied ships were sunk, while a total of 25 U-boats were destroyed by aircraft equipped with ASV radars (Buderi, pp. 155–169). "
Sad end of an era - not because there aren't good reasons for it, but because there were a lot of good opportunities for shared code and skills between NGINX and k8s with this approach.
Some languages are much harder to compile well to machine code. Some big factors (for any language) are things like: lack of static types and high "type uncertainty", other dynamic language features, established inefficient extension interfaces that have to be maintained, unusual threading models...
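A toy illustration of that "type uncertainty" (just a sketch; the function and names are made up): a compiler can't pin down what a single + means, because every call site can legitimately pass different types:

    def total(items):
        # A compiler can't assume what "+" does here: at runtime this one
        # operation may dispatch to int addition, float addition, string
        # concatenation, or any __add__ defined by user code.
        result = items[0]
        for item in items[1:]:
            result = result + item
        return result

    print(total([1, 2, 3]))        # int addition -> 6
    print(total([1.5, 2.5]))       # float addition -> 4.0
    print(total(["a", "b", "c"]))  # str concatenation -> "abc"

A JIT has to guess the common case and guard against the rest, which is exactly where the engineering effort goes.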
That makes sense if you're comparing with Java or C#, but not Ruby, which is way more dynamic than Python.
The more likely reason is that there simply hasn't been that big a push for it. Ruby was dog slow before the JIT and Rails was very popular, so there was a lot of demand and room for improvement. PHP was the primary language used by Facebook for a long time, and they had deep pockets. JS powers the web, so there's a huge incentive for companies like Google to make it faster. Python never really had that same level of investment, at least from a performance standpoint.
To your point, though, the C API has made certain types of optimizations extremely difficult, as the PyPy team has figured out.
> Python never really had that same level of investment, at least from a performance standpoint.
Or lack of incentive?
A lot of big Python projects that do machine learning and data processing offload the heavy number crunching from pure Python code to libraries like numpy and pandas, which use C API bindings for native execution.
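A minimal sketch of that offload pattern (assuming numpy is installed; the variable names are mine):

    import numpy as np

    values = list(range(1_000_000))

    # Pure-Python loop: every iteration runs through the interpreter,
    # operating on boxed int objects.
    total_py = 0
    for v in values:
        total_py += v * v

    # The same computation offloaded to numpy: a couple of calls cross
    # into native code, which loops over a packed array of machine ints.
    arr = np.asarray(values, dtype=np.int64)
    total_np = int((arr * arr).sum())

    assert total_py == total_np

The speedup comes from doing the loop in C over unboxed data, not from making the Python interpreter itself any faster.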
Google, Dropbox, and Microsoft, from what I can recall, all tried to make Python fast, so I don't buy the "hasn't seen a huge amount of investment" argument. For a long time Guido was opposed to any changes, and that ossified the ecosystem.
But the main problem was actually that PyPy was never adopted as "the JIT" mechanism. That would have made a huge difference a long time ago and made sure they evolved in lockstep.
Microsoft is the one the TFA refers to cryptically when it says "the Faster CPython team lost its main sponsor in 2025".
AFAIK it was not driven by anything on the tech side. It was simply unlucky timing, the project getting caught in the middle of Microsoft's heavy-handed push to cut everything. So much so that the people who were hired by MS to work on this found out they were laid off in the middle of a conference where they were giving talks on it.
The simplest JIT just generates the machine code instructions that the interpreter loop would execute anyway. It’s not an extremely difficult thing, but it also doesn’t give you much benefit.
A worthwhile JIT is a fully optimizing compiler, and that is the hard part. Language semantics are much less important - dynamic languages aren’t particularly harder here, but the performance ceiling is obviously just much lower.
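To make the distinction concrete, a toy sketch in Python rather than machine code (the opcodes and names are invented): the "compiled" version just pre-resolves the dispatch the interpreter loop would do anyway, so the per-instruction work is unchanged:

    # Toy stack machine: (opcode, argument) pairs.
    PROGRAM = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

    def interpret(program):
        stack = []
        for op, arg in program:  # dispatch happens on every instruction
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    def compile_program(program):
        # "Template JIT" in spirit: resolve each opcode to a callable once,
        # up front. Only the dispatch is removed; the work done per
        # instruction stays exactly the same.
        def push(stack, arg): stack.append(arg)
        def add(stack, _): b, a = stack.pop(), stack.pop(); stack.append(a + b)
        def mul(stack, _): b, a = stack.pop(), stack.pop(); stack.append(a * b)
        ops = {"push": push, "add": add, "mul": mul}
        steps = [(ops[op], arg) for op, arg in program]
        def run():
            stack = []
            for fn, arg in steps:
                fn(stack, arg)
            return stack.pop()
        return run

    assert interpret(PROGRAM) == compile_program(PROGRAM)() == 20

An optimizing JIT would instead notice the whole program is constant and fold it to 20 - and that kind of analysis is where the difficulty lives.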
Agree re: different types of JITs producing wildly different results, but I don't agree about language semantics - even a Java JIT has to give up speed due to certain seemingly minor language and JVM issues. So both matter - no matter how good a compiler engineer you are, some semantics are just not optimizable. Indeed, the use of "trace JITs" is proof of that.
This is orthogonal to the difficulty of actually implementing a JIT compiler.
It’s very much possible to make something less advanced than the JVM or CLR, or even V8, that will still outperform an interpreter, even for an extremely dynamic language like Python.
As others have mentioned, the roadblock here is that interpreter internals are too public, so doing it without breaking the C API that extensions use is really hard.
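As a hedged illustration of just how public those internals are (CPython-specific; this relies on id() returning the object's address and the refcount being the first field of every PyObject - exactly the kind of layout detail C extensions end up baking in):

    import ctypes
    import sys

    x = object()

    # In CPython, id(x) is the object's memory address, and every PyObject
    # begins with its reference count. Nothing stops Python code, let alone
    # a C extension, from reading it directly.
    refcnt_via_memory = ctypes.c_ssize_t.from_address(id(x)).value

    # sys.getrefcount reports one extra reference (its own argument).
    print(refcnt_via_memory, sys.getrefcount(x))

Any JIT that wants to keep extensions working has to preserve observable details like this, which rules out many otherwise-standard optimizations.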
I think it's just that Python people approached the problem differently: they made working with C and other languages better, built bindings for Python, and offloaded the performance-critical code to those libraries. Ex: numpy
Good blog post, good balance. One thing to add is that in systems programming, very often the struct is not arbitrarily defined by the programmer - it may be defined by the hardware or another system entirely. So the Data, including its bit-by-bit layout, is primal. It kind of makes sense to have procedures to operate on it, rather than methods.
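For example, a minimal sketch (the device and its layout are made up): the byte layout of this status block is dictated by an imaginary hardware spec, and plain procedures decode and query it:

    import struct

    # Hypothetical device status block, layout fixed by the (imaginary)
    # hardware spec, not by us: little-endian u16 device id, u8 flags,
    # u8 error code, u32 uptime in seconds.
    STATUS_FORMAT = "<HBBI"
    STATUS_SIZE = struct.calcsize(STATUS_FORMAT)  # 8 bytes

    def parse_status(raw: bytes) -> dict:
        # Plain procedure over externally defined data; the bytes are primal.
        device_id, flags, error, uptime = struct.unpack(STATUS_FORMAT, raw)
        return {"device_id": device_id, "flags": flags,
                "error": error, "uptime_s": uptime}

    def is_healthy(status: dict) -> bool:
        # Bit 0 of the flags byte = "online" in this made-up spec.
        return bool(status["flags"] & 0x01) and status["error"] == 0

    raw = struct.pack(STATUS_FORMAT, 0x1A2B, 0b0000_0001, 0, 3600)
    assert len(raw) == STATUS_SIZE
    print(is_healthy(parse_status(raw)))  # True

The struct's shape belongs to the hardware; the code just visits it.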