Surely they could put a traditional upper stage on Super Heavy and just go directly to the Moon, no? I'm not sure what the obsession with second-stage reuse is, because you lose almost all your margin.
I have been saying this for a while now. We have barely scratched the surface on both algorithmic and hardware optimizations for AI. I suspect we will get many orders of magnitude of speedup on high-quality AI.
The real question is whether it ends up “smart enough” or whether we take that extra compute budget and push the boundary further. Right now it seems that making the models larger only works up to a certain point.
The big problem with AI has always been that it is so energy-intensive compared to biological intelligence. However, once you bake the models into ASICs, the power consumption goes way down, and moreover the inference WILL be ~250X faster than it currently is (which is already on par with the speed of a human thinking).
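For rough scale, here's a quick back-of-envelope on that 250X figure (the only numbers from above are the ~250X and "on par with the speed of a human thinking"; the ~10 tokens/sec baseline for human inner-speech pace is my own assumption):

    # Back-of-envelope sketch (Python). Illustrative assumptions, not measurements.
    human_thinking_rate_tps = 10                      # assumed: rough human inner-speech pace, tokens/sec
    current_inference_tps = human_thinking_rate_tps   # "already on par with the speed of a human thinking"
    asic_speedup = 250                                # the ~250X claim

    asic_inference_tps = current_inference_tps * asic_speedup
    print(f"Hypothetical ASIC throughput: ~{asic_inference_tps} tokens/sec per stream")
    # ~2500 tokens/sec: one second of generation is several minutes of human reading.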
That's a very scary inflection point. Imagine, in 24 months, an Opus 4.6-level diffusion-based model etched directly onto silicon using the latest TSMC process node.
At that point knowledge work will be incredibly commoditized.
I have Opus 4.6 one-shotting recreations of '90s video games for less than the inflation-adjusted cost of buying the originals when they were released! Now cut that cost by 250X!