
Not only do I think there will not be a winner take all, I think it's very likely that the entire thing will be commoditized.

I think it's likely that we will eventually hit a point of diminishing returns, where performance is good enough and marginal improvements aren't worth the high cost.

And over time, many models will reach "good enough" levels of performance, including models that are open weight. Given even more time, these open-weight models will be runnable on consumer-level hardware. Eventually, they'll be runnable on super cheap consumer hardware (something more akin to an NPU than a $2000 RTX 5090). So your laptop in 2035, with specialized AI cores and 1TB of LPDDR10 RAM, is running GPT-7-level models without breaking a sweat. Maybe GPT-10 can solve some obscure math problem that your model can't, but does it even matter? Would you pay for GPT-10 when running a GPT-7-level model does everything you need and is practically free?

The cloud providers will make money because there will still be a need for companies to host the models in a secure and reliable way. But a company whose main business strategy is developing the model? I'm not sure they will last without finding another way to add value.



> Not only do I think there will not be a winner take all, I think it's very likely that the entire thing will be commoditized

This raises the question: why then do AI companies have these insane valuations? Do investors know something that we don't?


Investors, especially venture investors, are chasing a small chance of a huge win. If there's a 10% or even a 1% chance of a company dominating the economy, that's enough to support a huge valuation even if the median outcome is very bad.
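A rough back-of-envelope makes the asymmetry concrete. A minimal sketch in Python, where the probability and payoff figures are invented for illustration, not real estimates:

    # Toy expected-value calculation for a venture-style bet.
    # All numbers here are illustrative assumptions, not real figures.
    p_win = 0.01     # assume a 1% chance the company dominates the economy
    payoff = 10e12   # assume that outcome is worth ~$10 trillion
    residual = 0.0   # assume the median outcome is a total wipeout

    ev = p_win * payoff + (1 - p_win) * residual
    print(f"expected value: ${ev / 1e9:.0f}B")  # -> expected value: $100B

Even with a 99% chance of zero, the expected value alone can support a valuation in the tens of billions.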


I could certainly be wrong. Maybe I'm just not thinking creatively enough.

I just don't see how this doesn't get commoditized in the end unless hardware progress just halts. I get that a true AGI would have immeasurable value even if it's not valuable to end users. So the business model might change from charging $xxx/month for access to a chatbot to something else (maybe charging millions or billions of dollars to companies in the medical and technology sectors for automated R&D). But even if one company gets AGI first and unleashes it on creating ever more advanced models, I don't see that being a long-term advantage, because the AGI will still be bottlenecked by physical hardware (the speed of a single GPU, the total number of GPUs the AGI's owner can acquire, even the number of data centers they can build). That will give the competition time to catch up and build their own AGI. So I don't see the end of the AGI race being the point where the winner gets all the spoils.

And then eventually there will be AGI capable open weight models that are runnable on cheap hardware.

The only way the current state can continue is if there is always strong demand for ever more intelligent models, with no regard for their cost (both monetary and environmental). Maybe there is. Like maybe you can't build and maintain a Dyson sphere (or whatever sufficiently advanced technology) with just an Einstein-equivalent AGI. Maybe you need an AGI that is 1000x more intelligent than Einstein, and so there is always a buyer.


You're forgetting the cost of training.

Running the inference might get commoditized. But the required dataset and the hardware + time + know-how aren't easy to replicate.

It's not like someone can just show up and train a competitive model without investing millions.


Investors are often irrational in the short term. Personally, I think it’s a combination of FOMO, wishful thinking, and herd following.


"Billionaire investors are more irrational than me, a social media poster."


Zuckerberg has spent over fifty billion dollars on the idea that people will want to play a Miiverse game where they can attend meetings in VR and buy virtual real estate. It's like the Spanish emptying Potosi to buy endless mercenaries.


I mean, why do you think they have any idea how a completely new thing will turn out?

They are speculating. If they are any good, then they do it with an acceptable risk profile.


The correlation between "speculator is a billionaire" and "speculator is good at predicting things" is much higher than the correlation between "guy has a HN account" and "guy knows more about the future of the AI industry than the people directly investing in it".

And he doesn't just think he has an edge, he thinks he has superior rationality.


Past performance is not indicative of future results.

You would need ~30 years of continuously beating the market to be able to claim that you are statistically likely to be better than random chance.

Does your average speculator have 30 years of experience beating the market, or were they just lucky?
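A minimal sketch of the luck argument in Python (this treats each year as an independent 50/50 coin flip against the market, which is a big simplification, and the speculator count is made up):

    import random

    # Toy model: speculators with zero skill, where each year is a coin flip
    # against the market. Among enough of them, long streaks appear by luck alone.
    random.seed(0)
    n_speculators = 1_000_000

    def lucky_streaks(years: int) -> int:
        """Count zero-skill speculators who beat the market `years` years in a row."""
        return sum(
            all(random.random() < 0.5 for _ in range(years))
            for _ in range(n_speculators)
        )

    print(lucky_streaks(10))  # expect ~1e6 / 2**10 ≈ 977: ten-year streaks are common
    print(lucky_streaks(30))  # expect ~1e6 / 2**30 ≈ 0.001: thirty-year streaks aren't

The "~30 years" figure itself is still an assumption, but the sketch shows why a short winning streak is weak evidence of skill.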


I haven’t heard that statistic before. And the formulation seems imprecise? Does continuously beating the market mean that every single minute your portfolio value gains relative to the market?


"You would need ~30 years of continuously beating the market to be able to claim that you are statistically likely to be better than random chance."

You use the word statistically as if you didn't just pull "~30 years" out of nowhere with no statistics. And people become billionaires by making longshot bets on industry changes, not by playing the market while they work a 9-5.

"Does your average speculator have 30 years of experience beating the market, or were they just lucky?"

The average speculator isn't even allowed to invest in OpenAI or these other AI companies. If they bought Google stock, they'd mostly be buying into Google's other revenue streams.

You could just cut to the chase and invoke the Efficient Market Hypothesis, but that's easily rebutted here because the AI industry is not an efficient market with information symmetry and open investing.


"Having money is proof of intelligence"


It kinda is, at least I'd say a rich person is on average more intelligent than a poor person.


Anyone who believes this hasn't spent enough time around rich people. Rich people are almost always rich because they come from other rich people. They're exactly as smart as poor people, except the rich folk have a much, much cushier landing if they fail so they can take on more risk more often. It's much easier to succeed and look smart if you can just reload your save and try over and over.


Why do you think that? Do you have data or is it just, like, your vibe?


One can apply a brief sanity check via reductio ad absurdum: it is less logical to assume that poor individuals possess greater intelligence than wealthy individuals.

Increased levels of stress, reduced access to healthcare, fewer educational opportunities, a higher likelihood of being subjected to trauma, and so forth paint a picture of a correlation between wealth and cognitive function.


Yeah, that's not a good argument. That might be true for the very poor, sure, but not for the majority of the lower-to-middle of the middle class. There's fundamentally no difference between your average blue-collar worker and a billionaire, except that the billionaire almost certainly had rich parents and got lucky.

People really don't like the "they're not, they just got lucky" statement and will do a lot of things to rationalize it away lol.


> lower-to-middle of the middle class

The comparison was clearly between the rich and the poor. We can take the 99.99th wealth percentile, where billionaires reside, and contrast that to a narrow range on the opposite side of the spectrum. But, in my opinion, the argument would still hold even if it were the top 10% vs bottom 10% (or equivalent by normalised population).


Counterpoint: rich people would always remain rich, and we would have an ossified society, if this were true.

Intelligence is not the sole prerequisite for wealth or "being rich".

People can specialize in being intelligent, educated, well read, and more - while still being poor.

And we know that most entrepreneurs fail, which is why VCs function the way they do.



It does seem like common sense that they would be linked. But there is also research:

https://thesocietypages.org/socimages/2008/02/06/correlation...


The top companies are already doing double-digit billions in revenue. Their valuations aren't insane given that.


I wonder if that revenue might be short-lived once the free version of most AIs is good enough for almost all use cases.


This would explain why OpenAI and others seem to be pushing much harder into B2B/API applications. It feels like we're on to distribution capture as the differentiator now.


Because people are using Claude Code, not Cursor.



