Not only do I think there won't be a winner-take-all outcome, I think it's very likely that the entire thing will be commoditized.
I think it's likely that we will eventually hit a point of diminishing returns, where performance is good enough and marginal improvements aren't worth the high cost.
And over time, many models will reach "good enough" levels of performance, including models that are open weight. And given even more time, these open weight models will be runnable on consumer-level hardware. Eventually, they'll be runnable on super cheap consumer hardware (something more akin to an NPU than a $2000 RTX 5090). So your laptop in 2035, with specialized AI cores and 1TB of LPDDR10 RAM, is running GPT-7 level models without breaking a sweat. Maybe GPT-10 can solve some obscure math problem that your model can't, but does it even matter? Would you pay for GPT-10 when a GPT-7 level model does everything you need and is practically free to run?
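As a rough sanity check on the memory math, here's a sketch; the parameter counts and quantization levels are illustrative assumptions, not predictions:

    # Back-of-envelope: weight storage for a model at a given quantization.
    # Ignores KV cache and activations; all numbers are illustrative.

    def weights_gb(params_billions: float, bits_per_weight: int) -> float:
        return params_billions * 1e9 * (bits_per_weight / 8) / 1e9

    for params in (70, 400, 2000):        # hypothetical model sizes (billions)
        for bits in (16, 8, 4):           # common quantization levels
            print(f"{params}B @ {bits}-bit: ~{weights_gb(params, bits):,.0f} GB")

    # A hypothetical 2T-parameter model quantized to 4 bits is ~1 TB of
    # weights, i.e. roughly the RAM figure above.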
The cloud providers will make money because there will still be a need for companies to host the models in a secure and reliable way. But a company whose main business strategy is developing the model? I'm not sure they will last without finding another way to add value.
Investors, especially venture investors, are chasing a small chance of a huge win. If there's a 10% or even a 1% chance of a company dominating the economy, that's enough to support a huge valuation even if the median outcome is very bad.
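The arithmetic behind that is plain expected value; a minimal sketch with made-up numbers:

    # Expected value of a venture-style longshot (illustrative figures).
    p_win = 0.01               # 1% chance the company dominates
    payoff_win = 1_000e9       # hypothetical $1T outcome if it does
    payoff_lose = 0.0          # median outcome: the stake goes to zero

    ev = p_win * payoff_win + (1 - p_win) * payoff_lose
    print(f"Expected value: ${ev / 1e9:.0f}B")   # $10B on a 1% shot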
I could certainly be wrong. Maybe I'm just not thinking creatively enough.
I just don't see how this doesn't get commoditized in the end unless hardware progress simply halts. I get that a true AGI would have immeasurable value even if it's not valuable to end users. So the business model might change from charging $xxx/month for access to a chatbot to something else (maybe charging millions or billions of dollars to companies in the medical and technology sectors for automated R&D). But even if one company gets AGI and then unleashes it on creating ever more advanced models, I don't see that being a long-term advantage, because the AGI will still be bottlenecked by physical hardware (the speed of a single GPU, the total number of GPUs the AGI's owner can acquire, even the number of data centers they can build). That will give the competition time to catch up and build their own AGI. So I don't see the end of the AGI race being the point where the winner gets all the spoils.
And then, eventually, there will be AGI-capable open weight models that are runnable on cheap hardware.
The only way the current state of things can continue is if there is always strong demand for ever more intelligent models, forever, with no regard for their cost (both monetary and environmental). Maybe there is. Maybe you can't build and maintain a Dyson sphere (or whatever sufficiently advanced technology) with just an Einstein-equivalent AGI. Maybe you need an AGI that is 1000x more intelligent than Einstein, and so there is always a buyer.
Zuckerberg has spent over fifty billion dollars on the idea that people will want to play a Miiverse game where they can attend meetings in VR and buy virtual real estate. It's like the Spanish emptying Potosí to buy endless mercenaries.
The correlation between "speculator is a billionaire" and "speculator is good at predicting things" is much higher than the correlation between "guy has an HN account" and "guy knows more about the future of the AI industry than the people directly investing in it".
And he doesn't just think he has an edge, he thinks he has superior rationality.
I haven't heard that statistic before. And the formulation seems imprecise? Does "continuously beating the market" mean that your portfolio gains value relative to the market every single minute?
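For what it's worth, the usual formulation behind claims like that is a two-sigma significance test on annual excess returns. A sketch, where the 5% alpha and 15% tracking error are assumed numbers, not anything from the original claim:

    # How many years until observed outperformance is distinguishable
    # from luck at the two-sigma level? (Assumed, not measured, inputs.)
    alpha = 0.05    # assumed average annual excess return
    sigma = 0.15    # assumed annual volatility of that excess return

    # t-statistic after T years: t = (alpha / sigma) * sqrt(T).
    # Setting t = 2 and solving for T:
    years_needed = (2 * sigma / alpha) ** 2
    print(f"Years for a two-sigma result: {years_needed:.0f}")   # ~36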
"You would need ~30 years of continuously beating the market to be able to claim that you are statistically likely to be better than random chance."
You use the word "statistically" as if you didn't just pull "~30 years" out of nowhere, with no statistics behind it. And people become billionaires by making longshot bets on industry changes, not by playing the market while working a 9-to-5.
"Does your average speculator have 30 years of experience beating the market, or were they just lucky?"
The average speculator isn't even allowed to invest in OpenAI or these other AI companies. If they bought Google stock, they'd mostly be buying into Google's other revenue streams.
You could just cut to the chase and invoke the Efficient Market Hypothesis, but that's easily rebutted here, because the AI industry is not an efficient market with information symmetry and open access to investment.
Anyone who believes this hasn't spent enough time around rich people. Rich people are almost always rich because they come from other rich people. They're exactly as smart as poor people, except the rich folk have a much, much cushier landing if they fail, so they can take on more risk more often. It's much easier to succeed and look smart if you can just reload your save and try over and over.
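That "reload your save" effect is easy to simulate. A toy model where everyone has identical skill; the 10% per-venture success rate and attempt counts are arbitrary assumptions:

    import random

    random.seed(0)
    P_SUCCESS = 0.10      # same odds per venture for everyone
    TRIALS = 100_000

    def ever_succeeds(max_attempts: int) -> bool:
        # Keep founding ventures until one hits or the money runs out.
        return any(random.random() < P_SUCCESS for _ in range(max_attempts))

    for attempts, label in [(1, "no cushion, one shot"),
                            (10, "cushy landing, ten shots")]:
        wins = sum(ever_succeeds(attempts) for _ in range(TRIALS))
        print(f"{label}: {wins / TRIALS:.0%} look 'successful'")

    # Identical skill, but roughly 10% vs 65% end up with a success story.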
One can apply a brief sanity check via reductio ad absurdum: assuming the opposite, that poor individuals possess greater intelligence than wealthy individuals, is even less defensible.
Increased stress, reduced access to healthcare, fewer educational opportunities, a higher likelihood of experiencing trauma, and so forth all paint a picture of correlation between wealth and cognitive function.
Yeah, that's not a good argument. That might be true for the very poor, sure, but not for the majority of the lower-to-middle middle class. There's fundamentally no difference between your average blue-collar worker and a billionaire, except the billionaire almost certainly had rich parents and got lucky.
People really don't like the "they're not, they just got lucky" statement and will do a lot of things to rationalize it away lol.
The comparison was clearly between the rich and the poor. We can take the 99.99th wealth percentile, where billionaires reside, and contrast that to a narrow range on the opposite side of the spectrum. But, in my opinion, the argument would still hold even if it were the top 10% vs bottom 10% (or equivalent by normalised population).
This would explain why OpenAI and others seem to be pushing much harder into B2B/API applications. It feels like we're on to distribution capture as the differentiator now.