This is an underappreciated point. The economy would likely be in freefall without AI.
Yes, things look bleak for current college grads. The bitter pill to swallow is that they began college in the boom times of 2021-22, and they saw the college grads of those years walking straight off campus into high-paying jobs which don’t exist anymore. They only existed because of the obscene gobs of money whizzing around the economy post-COVID. Whether the shrinkage is due in part or in whole to AI is in the eye of the beholder. But if we had fallen into a broad-based recession, the numbers would look a lot bleaker. Plenty of companies that could automate away entry level positions with current tech haven’t done so, whether due to organizational inertia or ignorance or whatever. That organizational inertia would’ve been much more easily overcome by a market collapse.
China probably doesn’t accept Dominican pesos, either, and yet you’d be hard pressed to say that somebody with 100 billion Dominican pesos just has some random numbers. If you can exchange something for another form of value, then it has value. I think the trouble here is that there’s just nobody out there who would actually give you $100 billion worth of value for this particular asset. At least not as a lump sum.
I tend to agree, but for the sake of argument: it’s possible that he’s such a true believer that he’d see cashing out as a betrayal. Alternatively, he might understand that cashing out would significantly aid the (clearly large number of) people engaged in unmasking him. Further, he may simply realize that liquidating holdings of his size would drastically alter the market in ways that could end with the whole thing coming apart.
This is a limitation of the training data. If you were uncertain about something, you wouldn’t write a book about it. The kinds of people you’re talking about tend to generate far more text in their lives than others, because they can spend more time generating - writing books, blogposts, whatever - and less time thinking and working and actually doing things. The models never say they’re uncertain because we never say we’re uncertain, or at least we don’t write it down anywhere.
Do you have any recommendations for colorization tools? I agree that all of the popular image models subtly tweak faces; it's very uncanny when working with pictures of people I knew before they passed. In the pre-GPT age, there were some good-but-not-great colorization tools, and as far as I can tell you can't get better-than-2020 performance today unless you're willing to have your expression adjusted or your eyebrows redone.
The difference is switching costs and the viability of alternatives. Even open source models are only a few months behind the frontier labs, which is a long time in tech but practically no time at all in the eyes of a business consumer. At best, one of the frontier labs will survive and get to flex its hegemonic muscle. But billions and billions of dollars worth of investments still get wiped out in that scenario, which I would still qualify as the bubble popping. This would be doubly true if the winner winds up being Google or Microsoft.
But this is exactly the problem - we have to take it on faith that inference is profitable because nobody actually knows. It’s hard to even define what that would mean, and while I am suspicious of claims that frontier lab CEOs are just out-and-out liars or bad people, defining and calculating the real cost of inference would be time- and labor-intensive in its own right and there is no strong incentive to do it other than “tech reporters are curious.” Until the IPO, we just won’t know.
A lot of people know. A lot of insiders have said tokens are profitable. Is the conspiracy theory that everyone is lying - the OpenAI and Anthropic CEOs, their employees, Cursor's management, even the inference providers serving Chinese models?
Profitable on what basis? They generate more revenue than the cost of electricity? Does that factor in the cost to service the massive, multi-layer cake of debt that was necessary to even begin to serve inference in the first place - not from a training perspective but from a hardware and facilities perspective?
I’m not talking about training costs. I’m talking about startup costs. You have to pay for GPUs (or to rent data centers). You have to pay for the electricity that runs those data centers, and in a lot of cases these frontier labs are building the data centers on credit, so you need to pay for the construction, the materials, etc. If it was as simple as “running the GPUs costs less than we charge for it,” I might be inclined to agree. But the GPUs don’t just appear by magic.
Right now, demand for GPUs far outstrips supply. Every cloud company says it's leaving money on the table because it doesn't have enough compute to serve the demand.
It seems like you're arguing, like the author, that the bubble is going to collapse soon? How can it collapse when demand so vastly exceeds supply? Do you think the demand is fake? Or that AI will stop making progress from here on out?
The demand is real. The tech is real. The economics are completely unsustainable. Switching costs and barriers to entry are too low, operating costs are too high. And if the tech improves, it actually makes it even easier for competitors to swoop in and take market share. Not long ago, an agent that was 80% as good as SOTA was not usable. A year from now, an agent that is 80% as good as SOTA will be better than the best agent is today. We have it on good authority that today’s agents are very good, very useful. Why bother paying full price?
This is deeply ironic in a way. Because the whole premise of AI labor replacement is that AI does not need to be better than human labor, it just needs to be cheaper with acceptable performance. But the same is true one step down: discount AI doesn’t need to be better than bleeding-edge AI, it just needs to be cheaper with acceptable performance.
The winner will be the last man standing: the company that can keep subsidizing tokens after all the others have been forced to throttle and raise prices. Soak up the market share, wait for your rivals' investors to get cold feet, then raise prices the morning after the last one folds. (It's Google. It's always been Google.)
Not sure it qualifies as an “LLM prediction,” but he was adamant that Nvidia would not come through with the $100 billion funding round, and sure enough they did not.