I have an explanation (or rationalization, if you wish) for this.
AI has caused developer productivity to increase (similar to the other two big SW engineering productivity jumps - compilers and open source), which gives developers more leverage over employers (capital). Things you once needed a small team (and thus more capital) to build can now be done by a single person.
In the long run, this will mean more software being written, possibly by an even larger number of people (movement along the demand curve: as the price of SW goes down, demand increases). But before that happens, companies are having a knee-jerk reaction, trying to take back control over developers while assuming the total amount of software will stay constant. Hence the layoffs. But I think it's shortsighted; the companies will hurt themselves in the long run, because they will lay off people who could have built them more products in the future. (They misunderstood - developers are not getting cheaper, it's the code that will.)
I see this view pulled into the debate very often, but demand is not driven only by (low) cost. Demand obviously cannot grow infinitely, so the actual question IMO is when and how we reach the market saturation point.
The first hypothesis is that ~all SWEs will remain employed (demand will rise proportionally with the lower cost of development).
The second hypothesis is that some % of SWEs will lose their jobs - an over-subscription of SWE roles (the lower cost of development will drive demand, but not enough for the market to keep all ~30M SWEs employed).
The third hypothesis is that the number of SWEs will grow beyond ~30M - an under-subscription of SWE roles (demand will be so high and development costs so low that we enter an era of hyperinflation in software production).
At this point, I am inclined to believe that the second hypothesis is the most likely one.
I agree there is a dichotomy. I personally think AIs are better debaters than humans, at the very least in making fewer logical mistakes and having wider knowledge. I would suggest everyone run their thoughts through an AI to get a constructive critique; it would certainly save a lot of wasted time.
And I find the decision to "ban" AI slightly ironic, given that HN (unlike its predecessor Slashdot) has a disdain for funny or sarcastic comments - the kind that require the reader to think more, rather than having a clear argument handed over on a silver platter. That is what truly human communication is like: deliberately not always crystal clear.
I suspect that HN will eventually be replaced by an AI-moderated site, because it will have more quality content.
There are huge advantages to AI-moderation. TBD what the unintended consequences are. But I think it's worth trying.
I believe banning AI is a temporary solution. Even today it is very hard to tell human from AI; in the future it will be impossible. We are in the Philip K. Dick future of "Do Androids Dream" (the book, not the movie). Does it matter if we can't tell human from AI? The book proposes that how we feel about the piece we're reading is the only thing that matters. How the piece got created is irrelevant.
I think two things would be nice (but won't happen until the cost of AI decreases somewhat):
1. Pre-moderation - the AI looks at your comment before you submit it and suggests changes for clarity, factuality and argumentative strength. You can decide whether to accept these changes individually. It will also flag the comment automatically if it breaks the moderation guidelines too badly.
2. Discussion summary - the AI periodically distills the main debate points and supporting sources into a comprehensive document, which you can extend further with your comment. This will help steer the discussion and make it easier to consume later. It can also make discussions less ephemeral, which is a huge problem.
There's a way to measure the "entropy" of a codebase. Take something like the binary lambda calculus or the triage calculus, convert your program (including libraries, programming-language constructs, the operating system) into it, and measure the size of the resulting program in bits.
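A computable stand-in for this idea (hedged: encoding a real program into a minimal calculus is impractical here, so this sketch substitutes zlib compression as an upper-bound proxy for description length, a standard trick for approximating Kolmogorov complexity):

```python
import zlib

def entropy_bits(source: str) -> int:
    # Upper-bound estimate of a program's description length in bits.
    # A compressor can only overestimate the true minimal encoding,
    # so treat this as a proxy, not the exact measure described above.
    return len(zlib.compress(source.encode("utf-8"), 9)) * 8

# A highly repetitive "codebase" carries far fewer bits than its raw size.
boilerplate = "def add(a, b): return a + b\n" * 50
assert entropy_bits(boilerplate) < len(boilerplate) * 8
```

Relative comparisons between codebases measured this way are more meaningful than the absolute numbers.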
You can also measure the cross-entropy, which is essentially the whole-program entropy above minus the entropy of the programming language and the standard-library functions (i.e. abstractions that you assume are generally known). This is useful for evaluating conformance to "standard" abstractions.
There is also a way to measure a "maximum entropy" using types, by counting the number of states a data type can represent. The maximum entropy of a function is the cross-entropy between its inputs and outputs (treating the function like a communication channel).
The "difference" (I am not sure how to make the two commensurable) between the "maximum entropy" and the function's actual entropy (its size in bits) then shows how good your understanding of the function is, relative to the specification expressed in its type signature.
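The type-counting part can be sketched by enumerating representable states; the names below are illustrative, not tied to any particular language:

```python
import math

# Product types (records/structs) multiply state counts;
# sum types (enums/tagged unions) add them.
def states_product(*parts: int) -> int:
    n = 1
    for p in parts:
        n *= p
    return n

def states_sum(*parts: int) -> int:
    return sum(parts)

BOOL, BYTE = 2, 256

record = states_product(BOOL, BYTE)   # {flag: bool, tag: byte}
assert math.log2(record) == 9.0       # 512 states -> 9 bits of max entropy

option_bool = states_sum(1, BOOL)     # like Option<bool>: None + 2 values
assert option_bool == 3

# Viewing a function bool -> byte as a channel, at most
# min(log2 |inputs|, log2 |outputs|) bits can pass through it.
assert min(math.log2(BOOL), math.log2(BYTE)) == 1.0
```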
I have been advocating for some time that we use entropy measures (and information theory more broadly) in SW engineering to estimate complexity (and thus the time required for a change).
I kind of agree with the article. AI will make SW engineers (or engineers in general) a lot more productive, but you still need someone who translates fuzzy and potentially conflicting specs into something that can be built. That involves a lot of little decisions about how to resolve contradictions, which is why a formal programming language is used. AI can do this to an extent, but it likely won't get you what you want with less communication.
It's a misunderstanding that AI makes SW engineers less valuable because producing code becomes cheaper. This assumes there is some fixed amount of code that society needs. I think companies will face a different reality: the code they own ("intellectual property") will become less valuable, but the programmers (now effectively promoted to a kind of product manager) will become more valuable, as they can do more (and cause more damage, too).
The innovations of the past, such as compilers and open source, which made programmers more productive, didn't make them obsolete.
That being said, it will take companies (and their owners) some time to accept the new reality: programmers have more power now, and it's harder to gatekeep what they work on. So the management of these companies will try to push back, which will ultimately be counterproductive. Programmers should recognize this and look into some form of social organization - be it unions, professional organizations or worker cooperatives. (The distinction between labor and capital is not a natural law, just as the distinction between lords and peasants isn't god-given.)
For a similar thesis - that austerity policies are manufactured by the well-off - I recommend Clara Mattei's The Capital Order (as well as her YT channel).
I disagree with the article. I think it is always possible to come up with reasonably small theories that capture most of a given phenomenon. So in a sense, you don't need complex theories in the form of large NNs (models? functions? programs?), except for more precise prediction.
For example, global warming. It's nice to have AOGCMs that include everything and the carbon sink. But if you want to understand it, a two-layer model of the atmosphere with CO2 and water-vapor feedback will do a decent job and gives similar first-order predictions.
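For illustration, a minimal sketch of such a first-order model: an n-layer "gray atmosphere" energy balance. It omits convection, clouds and the water-vapor feedback mentioned above, so it exaggerates the per-layer warming, but it reproduces the classic ~255 K effective temperature and the direction of the greenhouse effect:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def effective_temp() -> float:
    # Temperature at which Earth radiates away the sunlight it absorbs.
    absorbed = S0 * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

def surface_temp(n_layers: int) -> float:
    # Each fully IR-absorbing layer multiplies surface emission by (n+1),
    # so surface temperature scales as (n+1)^(1/4).
    return effective_temp() * (n_layers + 1) ** 0.25

assert abs(effective_temp() - 255) < 1           # the textbook ~255 K
assert surface_temp(2) > surface_temp(1) > effective_temp()
```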
I also don't think poverty is a complex problem, but that's a minor point.
> I also don't think poverty is a complex problem, but that's a minor point.
I'm not sure it's a minor point. I don't think poverty is a "complex" problem either, as that term is used in the article, but that doesn't mean I think it fits into one of the other two categories in the article. I think it is in a fourth category that the article doesn't even consider.
For lack of a better term, I'll call that category "political". The key thing with this category of problems is that they are about fundamental conflicts of interest and values, and that's a different kind of problem from the kind the article talks about. We don't have poverty in the world because we lack accurate enough knowledge of how to create the wealth that brings people out of poverty. We have poverty in the world because there are people in positions of power all over the world who literally don't care about ending poverty, and who subvert attempts to do so--who make a living by stealing wealth instead of creating it, and don't care that that means making lots of other people poor.
Every sedentary society has historically scared its members with the dangers of the nomadic lifestyle, heathens, ...
The implied conclusion is that since our ancestors switched from nomadic to sedentary, it must have been preferable - a kind of informal, democratic, collectively and individually approved choice.
Surely sedentary must have been better - how else could such a transition have been sustained?
Rather easily, as it happens: it's perfectly possible for the average or median quality of life under a sedentary lifestyle to be a net setback compared to the nomadic one. Slavery can't be effectively implemented in a nomadic lifestyle, whereas the sedentary lifestyle creates both the demand for slave labor (routine, monotonous work in the fields) and the means to enforce it: escaping a nomadic tribe moving in Brownian motion is much easier than escaping from a randomly assigned position deep inside a large sedentary empire. Even if you escape the sedentary village, the stable neighbouring village will happily return you to "your owner", so that he would hopefully return the favor if he ever catches one of "their slaves".
It's easy to claim a net improvement in life quality ... by discounting the loss of life quality of the slaves!
Nomadic lifestyle was simply outcompeted by sedentary-enabled slavery!
> even if you escape the sedentary village, the stable neighbouring village will happily return you to "your owner" so that he would hopefully return the favor if ever he catches one of "their slaves"
Tell that to all the people who ran the Underground Railroad in the pre-Civil War US, not to mention all the other ways that Fugitive Slave laws were persistently violated.
I think you are vastly underestimating the benefits of a modern "sedentary" society. But as I pointed out in my other post, if you really don't think they're benefits, then you can simply forgo them. Go and live an off grid subsistence lifestyle. There are people who do that. But of course they don't post on the Internet.
How is someone living a hunter-gatherer subsistence life going to get Internet access? That requires a technological society, which requires a lot of wealth creation way above a subsistence level.
If you're saying that someone might claim they're living a hunter-gatherer subsistence life except when they're not, well, that's just hypocrisy. If you're going to make use of things that require a modern technological society, then you're saying life in a modern technological society is preferable to a hunter-gatherer subsistence life, whether you like it or not. You can't have it both ways.
If you think a subsistence nomadic lifestyle is preferable to a modern "sedentary" one, then how are you able to post here? Subsistence nomads don't have Internet access (to name just one of umpteen things we "sedentary" moderns have access to that they don't). There are ways to live off grid if you really think it's preferable.
Fine. And whatever device you're using to post here just happened to emerge spontaneously from the dirt, instead of being built by the efforts of thousands of people spread all over the world as part of a modern technological society.
Also: where do you get your food? Do you grow it? Or hunt for it in a natural wilderness, untouched by technology, using tools you made yourself, without the benefit of modern technology?
Where do you get your clothes? Do you make them yourself? Out of natural materials that would be there if our modern, technological society did not exist?
I'm going to make a wild guess that the answers to those questions are "no"--that you are relying on sources of food and clothes that also require a modern technological society. Not to mention transportation and whatever else you need to do the things that occupy your day.
So no, you are not living a hunter-gatherer subsistence life. You are taking advantage of the fact that it is possible in a modern technological society to be a homeless bum living under a bridge, without having to do all the things that actual hunter-gatherers living a subsistence life have had to do all through human history to survive.
> I think it is always possible to come up with reasonably small theories that capture most of the given phenomena.
I can write a program (call it a simulation of some artificial phenomenon) whose internal logic is arbitrarily complex. The result is irreducible: the entire byzantine program with all of its convoluted logic is the smallest possible theory to describe the phenomenon, and yet the theory is not reasonably small for any reasonable definition.
That's true, but I can still approximate what the system does with a simpler model. For example, I can split the states of the system into n distinct groups and measure the transition probabilities between them.
Thermodynamics is a classic example of a phenomenological model like that.
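A toy version of that coarse-graining (the logistic map and the 3-group split are arbitrary choices for illustration):

```python
import random

def logistic(x: float) -> float:
    # A deterministic system with complicated long-run behavior.
    return 3.9 * x * (1 - x)

def coarse_grain(n_groups: int = 3, steps: int = 100_000, seed: int = 1):
    # Lump the continuous state into n groups and estimate the
    # transition probabilities between groups from a long trajectory.
    random.seed(seed)
    x = random.random()
    counts = [[0] * n_groups for _ in range(n_groups)]
    prev = min(int(x * n_groups), n_groups - 1)
    for _ in range(steps):
        x = logistic(x)
        cur = min(int(x * n_groups), n_groups - 1)
        counts[prev][cur] += 1
        prev = cur
    # Normalize rows into a phenomenological Markov transition matrix.
    return [[c / max(sum(row), 1) for c in row] for row in counts]

P = coarse_grain()
assert all(abs(sum(row) - 1) < 1e-9 for row in P)  # each row is a distribution
```

The resulting matrix discards the deterministic detail but still predicts coarse behavior, much the way thermodynamics summarizes molecular dynamics.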
> requires an amount of time approaching the time spent if they had just done it themselves
It's actually often harder to fix something sloppy than to write it from scratch. To fix it, you need to hold in your head the original, the new solution, and the difference between them, which can be very confusing. The original solution can also anchor your thinking to one particular approach to the problem, which wouldn't happen if you solved it from scratch.
Sloppy code that has been around for a while works. It likely has support for edge cases you forgot about. Often the sloppiness is there because of those edge cases.
Those are different things. Often you don't plan for all the necessary things, and so a change doesn't fit in - even though a better design exists that would have it fit in neatly - but only years later do you see it, and getting there is now a massive effort you can't afford. The result looks sloppy because in hindsight the right design is obvious.
It might unravel intellectual property, just not in a fair way. When capitalism started, public land was enclosed to create private property. Despite this being in many cases a quite unfair process, we still respect this arrangement.
With AI, a similar process is happening - publicly available information becomes enclosed by the model owners. We will probably get a "vestigial" intellectual property in the form of model ownership, and everyone will pay a rent to use it. In fact, companies might start to gatekeep all the information to only their own LLM flavor, which you will be required to use to get to the information. For example, product documentation and datasheets will be only available by talking to their AI.
> How does China incentivize its executives to spend money on redundancy?
IMHO, China has two parallel status hierarchies - one in the ruling party, the other in the private sector. So the ruling party can maintain capitalist competition and dictate overall industrial policy, which fuels innovation.
In the West, the rich people who own capital have captured the political class, so there is only one status hierarchy now - that of the capitalists. This hierarchy ossifies and becomes increasingly resistant to competition: why invest in something new when it will most likely cause your individual wealth to decrease?
I think this manifests as the wealthy class increasingly speculating on rising asset prices and extracting rents from them, rather than investing in productive infrastructure. So neoliberalism (capitalism) is in a sort of tragedy of the commons, where wealthy individuals benefit more from this financialization than from actual production.
The West avoided this in the past by having strong unions and a middle class that controlled policy through democracy, which balanced the accumulation of wealth. Whether China can in the future avoid this (or some other) trap - where the status elite ossifies and prevents investment in the interest of the whole society - remains to be seen.
Should be, but right now, it isn't. So the name is apt, I am afraid.