That's not at all what DeepSeek showed.

It showed that a single training run of a particular model could be done with an amount of compute that is massive, but not so massive that only OpenAI/Google/etc. can afford it. It's still well out of reach for you or me, or a university, or a mid-tier company.

In any case, that's small potatoes here. OpenAI spends most of its money not on training, but on inference. Inference is still way too expensive.
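To put a rough shape on that claim, here's a back-of-envelope sketch in Python. Every number in it is a made-up assumption for illustration, not an actual OpenAI figure:

    # Back-of-envelope: why serving costs can dwarf a one-time training run.
    # All numbers below are illustrative assumptions, not real figures.
    training_run_cost = 100e6   # assumed one-time frontier training run, USD

    daily_queries    = 1e9      # assumed queries/day across the user base
    tokens_per_query = 1_000    # assumed average tokens generated per query
    cost_per_mtok    = 1.00     # assumed serving cost per million tokens, USD

    daily_inference  = daily_queries * tokens_per_query / 1e6 * cost_per_mtok
    yearly_inference = daily_inference * 365

    print(f"one training run:  ${training_run_cost:,.0f}")
    print(f"inference, 1 year: ${yearly_inference:,.0f}")
    print(f"ratio:             {yearly_inference / training_run_cost:.1f}x")

Under these assumed numbers, a single year of serving costs several times the training run, and inference recurs every year while the training run is (mostly) one-time.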

Even if inference gets cheaper, that's not great for OpenAI: it means other people can launch a similar chat experience for fewer dollars.

Fundamentally, OpenAI needs a moat. And it has nothing.



It has a long list of content partnerships, and by far the largest user base, which means lots of unique training data. If it can succeed in spamming the open Internet enough to crowd out competitors through crawling costs and bot filters, it'll have a pretty good data moat.


It has nothing.

The user base is attached to an undifferentiated, easily swappable product.

Its data hoard is nothing special compared to what the other players (Google, Meta, etc.) already have.

And you can see this playing out. If it had a real moat and its data were that good, it would be significantly ahead of the competition. Instead, everyone is at pretty much the same place, moving at the same speed.


It’s too early to call winners. It definitely has more than “nothing” though.



