Anyone who's paid attention to the last 15+ years of tech and business knows that it's all about capture and extraction. All the feel-good language about "democratizing" tech or "making the world more open and connected" or "don't be evil" is just a smokescreen for people who want to bring about modern feudalism.
It's hard to see AI as anything but the latest accelerant for that.
I think things are pretty clear. I don’t know when the markets will agree, sadly.
We do know that technological advancements will leave today's data centers as stranded assets: there isn't enough money in even the most optimistic revenue projections to pay for them, and models are simultaneously getting better and cheaper to operate.
Adobe (and similar companies) will either improve or be replaced by vibe coding. I think the assumption much of Wall Street and management is making is that Adobe can replace itself with vibe coding and vibe customer support, and somehow not be simultaneously out-innovated by a few dozen companies founded by the folks they laid off.
Local inference is 6-12 months behind SOTA. If that holds, you could have 2029 SOTA running locally on a Raspberry Pi 8, or 2030 SOTA for $500/month (in 2026 dollars). If 2030 SOTA is qualitatively better at that point, then we'll be way past AGI, and the economy will be unrecognizable.
It is basically impossible for AI software improvements to devalue the AI compute investments.
It's the other way around, software improvements make the hardware more valuable. Let's say that one unit of compute can generate one unit of value. As the software improves on any of the principal axes (cheaper cost for same quality, or new capabilities that you could previously not get for any price), that same unit of compute will produce more value.
What would threaten those compute investments? Basically, order-of-magnitude improvements in the hardware, but that kind of thing will take longer to happen than the projected lifetime of the hardware. (Or the demand for AI evaporating, but that tends to be an issue of faith that is hard to have a useful discussion on.)
That assumes, as a baseline, that all existing LLM investment divided by all existing LLM usage is already net valuable. If it isn't yet, then software improvements may or may not push those investments over the profitability threshold.
Exactly, my view is intellectually honest because it's falsifiable. I would love to live in a world where tech largely respects and empowers end-users instead of trapping them in engineered dependency. Tech companies just need to act humanely.
I'm not sure I see that with the big tech I'm an end user of, the biggest being Google. I get free search, email, and YouTube. It's provided a lot of value for me and never really caused problems on my end.
I think the downsides of Google fall more on competitors and on companies that have to pay them: the online travel business, for example, has suffered because those companies need to pay Google a lot to get any customers.
I'm not sure what the answer is. Maybe some monopoly laws that make their service worse so others can compete?
>Apps may change and often experience “enshittification.” Hardware can break but non-connected hardware is otherwise unlikely to change.
This is a huge part of it. Connected apps are never complete. They can't be. They must always evolve in pursuit of the marginal user, eventually betraying their core audience.
An unconnected single-purpose device doesn't need to make anyone happy other than the user who owns and operates it, which makes it more valuable for the user in the long-run.
It's the tools. Friction and barriers aren't bad things. When you have a CD player, there's a higher barrier to switching from "listening to music" to "doing something else."
It's something I noticed in myself when I switched from streaming services to a curated local library. I actually listen to entire albums and savor them instead of jumping around from one infinite content firehose to another. Streaming is convenient, but the friction of maintaining a local library makes it meaningful.
College is wildly useful for motivated students: the ones who go out of their way to pursue opportunities uniquely available to them like serving as TAs, doing undergrad research, rising up the ranks in clubs and organizations, etc. They graduate not just with a credential but social capital. And it's that social capital that shields you from ChatGPT.
College for the "consumer" student isn't worth much in comparison.
Social affinity and reputation represent winning strategies that have served humans very well since the dawn of time. It shouldn't be surprising that they continue to be extremely effective even (or perhaps especially) in the age of AI.
Nepotism exists because of "what is the point of doing all this?" - i.e., passing things on to family.
It also enables a degree of aligned interests between parties that could otherwise be hard to align (trust, like you mention), but that's not why someone gets a big-name acting slot, or gets put on the board of a friend's company.
Nepotism entangles organizational interests with personal interests, in both good and bad ways. It means that someone may hire a friend or family member because a) they know they're competent enough for the job, and b) they actually, personally know them, which significantly reduces the risk of the hire turning out badly, relative to a stranger with equal or better credentials. But it also means that someone may hire a friend or family member because they're trading favors, which is bad for the organization[0].
I suppose in practice the latter might be more common - I'd guess the whole thing has structural dynamics similar to "the market for lemons". I haven't spent much time thinking about or researching the problem in depth, so I can't say.
--
[0] - And may or may not be bad for the local community. I suppose the larger problem for organizations is simply that they're designed to be focused, and need to maintain alignment of incentives across the org chart. Nepotism is a threat because it attaches new edges to the org chart - edges that lead to much more complex and fuzzy graphs of family and community relationships, breaking the narrow focus that makes organizations work.
>that have served humans very well since the dawn of time.
Except none of this scales in the modern world beyond flat, small orgs in homogeneous, high-trust cultures - basically modern tribes.
If you're a large org with diverse people from everywhere and you empower everyone down the ladder to hire the people they trust, they'll just end up gaming the system or hiring their friends and family and the org fails from nepotism, corruption and cronyism.
We have plenty of examples of this happening everywhere in the world, which is why most places have official hiring policies against this behavior, or policies that obfuscate connections from the hiring pipeline to make sure people get in exclusively on merit.
It's also why socialism is only financially viable in small homogeneous communities (like the Amish, for example), where everyone adheres to the social contract of contributing more to society than they take out, and is kept honest and accountable by the ingroup. It fails at the nation level, where everyone, including the government in charge of managing it, tries to defraud or game the system in their favor, taking out more than they contribute, leading to constant budget deficits and ultimately collapse (see EU state pension systems).
But yes, fully eliminating nepotism and cronyism via rules and laws is nearly impossible due to human own-group bias, so networking will always be a huge asset.
Although I might know a solution; hear me out. I have fond memories of being part of an amazing private torrent tracker back in the day that was 100% invite-only. The way the community was kept honest and accountable to the spirit and the rules was that every person was responsible for the people they invited: if an invitee committed a bannable offense, the parent who invited them also got banned. That made people very selective with their invites, biasing towards meritocracy rather than nepotism or selling invites online for cash, which was common back then. Feels like something that could scale IRL as well: you hire a friend who turns out to be a terrible employee, you're out the door along with him.
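The invite-tree scheme above can be sketched as a tiny data structure. This is my own reconstruction of the policy as described, not the tracker's actual code; all names are hypothetical, and I've assumed the ban reaches only the direct inviter (one level up), since the comment doesn't say whether it cascades further.

```python
class InviteTree:
    """Hypothetical sketch: each member records who invited them, and
    banning a member also bans their direct inviter."""

    def __init__(self):
        self.inviter = {}   # member -> who invited them (None for roots)
        self.banned = set()

    def join(self, member, invited_by=None):
        # Record the invite edge; roots (staff, founders) have no inviter.
        self.inviter[member] = invited_by

    def ban(self, member):
        # Ban the offender, and (per the policy) their direct inviter.
        self.banned.add(member)
        parent = self.inviter.get(member)
        if parent is not None:
            self.banned.add(parent)


tree = InviteTree()
tree.join("alice")                      # a root member
tree.join("bob", invited_by="alice")    # alice vouches for bob
tree.ban("bob")                         # bob's offense also bans alice
```

The shared liability is what makes invites costly to hand out: every edge you add to the tree puts your own membership at stake.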
Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not to) to think about second-order effects and what steady-state outcomes their systems would have. That's engineering 101 right there.
That's a small part of why people have become more cynical about tech over the decades. At least with the internet there were large efforts to nail down security in the early '00s. Imagine if we had instead let it devolve into a moderator-less hellscape where every other media post is some goatse-style jump scare.
That's what it feels like with AI. But perhaps worse, since companies are lobbying to keep the chaos instead of forming a board of standards and etiquette.