I think the evidence that AI is better at knowledge work without a human in the loop... is very limited.
Humans with many agents will be more productive, but the tendency of these models has been to regress to the mean when it comes to strategic insights.
So far, I think you're right. But the rate of progress just seems so crazy that I'm not seeing any moats that look fundamental. I hope I'm wrong and you're right.
None of modern society and economics was put together accidentally, IMO. It was purposeful, a mix of success & failures, serendipitous, and filled with mixed motives... but that's not quite the same as an accident.
A mix of political scientists, politicians, investors, entrepreneurs, lawyers, judges, scientists, technologists, and economists have tried to mold society to their own theoretical visions for at least 150 years. Society then reacts to that in both good and bad ways. This distorts the vision, as society reshapes it to fit its own concerns. And the cycle repeats.
I think of Karl Polanyi's The Great Transformation as a great way of looking at the attempts to force "market society" on England in the 1700s and 1800s, and the reaction that all societies exhibit in the face of unconstrained technological or economic change. Both the imposition of change and the reaction to it can be violent; it's hard to predict. We've had such a relatively steady state since WW2 in the developed nations that we're not used to this cycle.
Accidentally is the wrong word, but considering it was never done before and had some very unusual constraints (a large coal supply and coal industry, a sufficiently centralised state that could provide peace within its borders but had been neutered into compromise with the parliamentary middle class, finance centres, maritime trade, etc., etc.), the fact that it was done at all does feel … unplanned?
The image that sticks in my mind the most is the Meiji Emperor in an 1870s photo, dressed in a Savile Row suit and bowler hat. For Japan, it was the most incredible social card to play, one that says “we are going to be like these foreigners and their secrets to wealth.”
Nothing accidental there, but that still leaves visible joins on the Japanese soul.
Peter Drucker identified this phenomenon as the rise of knowledge work as "the means of production" in the 1950s and 1960s. Management (of people, tasks, responsibilities, and disciplines) and knowledge work were the two sides of organizational performance. Drucker felt that "post-capitalist society" was the recognition that capital had ceased being the primary factor of production. No matter how much capital you throw at a problem, if you can't retain people who know what you're doing, you won't get far.
Knowledge is a unique resource compared to the other traditional factors of economic production (land, labor, and capital). It is often invested in with capital (education and tools), but it is carried with the human, and leaves with them. It is always decaying - knowledge workers should be in constant learning mode, and stale knowledge eventually becomes a drag on performance.
I'd argue the future is about knowledge workers all becoming managers. When you use agentic AI, it has the flavor of the skills of management. Management is "a practice and a liberal art," according to Drucker, one that has been in poor supply for some time. LLMs have somewhat stale knowledge and require the human, tools, and RAG to freshen it. And LLMs will always regress to the mean. They are pretty good at pattern analysis but get shaky and mediocre at synthesis. They require very nuanced, elaborate prompting to shape their token output toward insightful results that aren't a standard answer. For coding exercises, that can be fine, but at high complexity levels, or when dealing with issues of strategy or evaluation, an LLM is a platitude generator with no unique competitive advantage.
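As a toy illustration of what "freshen it with RAG" means in practice (hypothetical names, and keyword-overlap retrieval standing in for the embedding search real systems use):

    # Toy RAG sketch: retrieve fresh documents and prepend them to the
    # prompt so the model isn't limited to its stale training data.
    # (Hypothetical; real systems use embedding-based retrieval.)
    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        # Rank documents by how many words they share with the query.
        words = set(query.lower().split())
        return sorted(docs,
                      key=lambda d: len(words & set(d.lower().split())),
                      reverse=True)[:k]

    def build_prompt(query: str, docs: list[str]) -> str:
        # Prepend the freshest context ahead of the actual question.
        context = "\n\n".join(retrieve(query, docs))
        return f"Context:\n{context}\n\nQuestion: {query}"

Note the human still does the managerial work here: deciding what belongs in the corpus and judging whether the output is insight or platitude.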
In other words, competent, talented management mixed with knowledge work is the scarcity we are heading towards. This is arguably why you're seeing the rise of "markdown frameworks" that people swear improve performance; they're the beginnings of management scaffolding for AI.
Technical folks struggle to value management skills, and I expect this shift will increase both the value and the scarcity of those skills.
As for "Physical robustness. Strength, perhaps brutality. Competence in physical tasks." I think the robots will be replacing that pretty shortly.
"Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values."
Ehhhhh, not really? What about Christianity, where the meek shall inherit the Earth and love is the core value (putting aside modern-day Pharisees and charlatans who twist the underlying value system)? Or Islam, whose core value is submission to God? While there have been societies that valued parentage and birth order, that's far from universal.
> "post capitalist society" was the recognition that capital ceased being the primary factor of production. No matter how much capital you throw at a problem, if you can't retain people that know what you're doing, you won't get far.
This leads to the reformulation of knowledge workers as "human capital", and it's hardly post-capitalist. A capitalist society is one where people assemble different forms of capital to produce capital returns that are larger than the sum of the capital inputs, where the possibilities available to you depend on the amount and quality of capital that you have access to. This is all still very relevant when discussing human capital - access to human capital is determined by the quality of your professional networks, whether you decide to be present in geographic talent clusters (i.e. cities as centers of industry), and whether you have sufficient financial capital available in trade.
AI will not transition us to a post-capitalist society. Its promise is solely the ability to replace human capital with other forms: chips and electricity. It does not spell the death of human labor any more than computers and spreadsheets did for accountants.
I had thought that months aren't quite a human construct; they correspond roughly to lunar cycles. Weeks were a way to carve the month up into the four lunar phases per cycle.
Seconds, minutes, hours, etc. are, as you say, all sexagesimal math bias.
You're right about months/weeks; I was imprecise in the post: calendar months (as we now use them) are a weird cultural construct (I mean, who would divide a measurement unevenly and think that was a good idea?).
Someone actually mathed out infinite monkeys at infinite typewriters, and it turns out it is a great example of how misleading probabilities are when dealing with infinity:
"Even if every proton in the observable universe (which is estimated at roughly 1080) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys."
Often, things that have probability 1 in the infinite limit are, in practice, safe to treat as probability 0.
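For a sense of the scale, here's a back-of-envelope sketch in Python (my own toy assumptions, not the quoted paper's numbers: a 30-key typewriter, uniformly random keystrokes, and a target of roughly 5 million characters, on the order of Shakespeare's complete works):

    import math

    # Toy assumptions: 30-key typewriter, uniform random keystrokes,
    # target text of ~5,000,000 characters.
    KEYS = 30
    TEXT_LEN = 5_000_000

    # P(a given run of keystrokes matches the text) = (1/KEYS)^TEXT_LEN.
    # Work in log10, since the value underflows any float.
    log10_p = -TEXT_LEN * math.log10(KEYS)
    print(f"P(success per attempt) ~ 10^{log10_p:,.0f}")  # ~ 10^-7,385,606

    # ~10^80 protons, each typing one key per second since the Big Bang
    # (~4.35e17 seconds): only ~10^98 keystrokes ever produced.
    log10_keystrokes = 80 + math.log10(4.35e17)
    print(f"total keystrokes available ~ 10^{log10_keystrokes:.0f}")

A shortfall of millions of orders of magnitude is the point: "probability 1 given infinite time" tells you nothing about feasibility in a finite universe.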
So no. LLMs are not brute force dummies. We are seeing increasingly emergent behavior in frontier models.
> It is unsurprising that an LLM performs better than random! That's the whole point. It does not imply emergence.
By definition, it is emergent behavior when a model exhibits the ability to synthesize solutions to problems it wasn't trained on, i.e., it can generalize.
Emergent behavior would imply that some other function was being reduced to token prediction. Behaving "better than random" (i.e., not just brute-forcing) would not qualify: token prediction is not brute forcing, and we expect it to do better; it's trained to do so.
If you want to demonstrate an emergent behavior you're going to need to show that.
I'm all for skeptical inquiry, but "burning all credibility" is an overreaction. We are definitely seeing very unexpected levels of performance in frontier models.
Third things can exist. In other words, you’re setting up a false dichotomy between “human computation” and “computer computation” and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt.
Edit: the implication comes from demanding that the OP’s definition be rigorous enough to cover all models of “computation,” and then treating its failure to do so as proof that LLMs must be more like humans than computers.