I don't understand this thinking as a computer programmer. My whole life has been about getting a computer to do work so humans don't have to anymore. Every single piece of software written is supposed to take away work from someone.
Do you feel this way about every automation you create? I do know some old school sys admins who felt this way about a lot of infrastructure automation advancements, and didn't like that we were creating scripts and systems to do the work that used to be done by hand. At one job, my team created an automated patching system that ran patching across our 30,000 servers, taking systems in and out of production autonomously and making the entire process hands-free. We used to have a team whose full time job was running that process manually. Did we take their jobs by automating it?
Sure, in a sense. But there was other work that needed to be done, and now they could do it.
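For what it's worth, the core loop of a system like that isn't exotic. Here's a minimal sketch of the shape of it, with entirely hypothetical names; our actual implementation was far more involved:

```typescript
// A hypothetical sketch of hands-free fleet patching. All of these
// operations are placeholders for calls to the load balancer, config
// management, and monitoring systems.
type Host = string;

async function drainFromProduction(host: Host): Promise<void> { /* ... */ }
async function applyPatches(host: Host): Promise<void> { /* ... */ }
async function healthCheck(host: Host): Promise<boolean> { return true; }
async function returnToProduction(host: Host): Promise<void> { /* ... */ }
async function pageOnCall(host: Host): Promise<void> { /* ... */ }

// Patch in small batches so most of the fleet stays in production at any
// moment; a failed health check leaves the host drained for a human.
async function patchFleet(hosts: Host[], batchSize = 50): Promise<void> {
  for (let i = 0; i < hosts.length; i += batchSize) {
    const batch = hosts.slice(i, i + batchSize);
    await Promise.all(
      batch.map(async (host) => {
        await drainFromProduction(host);
        await applyPatches(host);
        if (await healthCheck(host)) {
          await returnToProduction(host);
        } else {
          await pageOnCall(host);
        }
      })
    );
  }
}
```

The whole point is that nothing in that loop needs a human in it.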
The whole reason I like programming and computers and technology is precisely because it does things for us so we don't have to do it. My utopia is robots doing all the hard work so humans can do whatever we want. AI is bringing us one step closer to that, and I would rather focus on trying to figure out how we can make sure the whole world can benefit from robots taking our jobs (and not just the rich owners), rather than focus on trying to make sure we leave enough work for humans to stay busy doing shit they don't actually want to do.
The problem is that people are holding AI wrong. They're using AI as the engine of their solution without realising the true solution:
Use AI to create the engine. After that, running the engine itself costs only as much as keeping the computer it runs on online. No API costs for third-party LLM providers needed.
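To make the distinction concrete, here's a hedged sketch; the invoice example and every name in it are made up:

```typescript
// "AI as the engine": every single request pays a third-party LLM provider.
async function llmComplete(prompt: string): Promise<string> {
  /* hypothetical call to a hosted model */
  return "0";
}

async function extractTotalWithLLM(invoiceText: string): Promise<number> {
  const answer = await llmComplete(
    `What is the total on this invoice?\n${invoiceText}`
  );
  return Number(answer);
}

// "AI to create the engine": you use AI once, at development time, to write
// an ordinary deterministic function. Running it costs nothing per call.
function extractTotal(invoiceText: string): number {
  const match = invoiceText.match(/total[:\s]*\$?([\d,]+\.\d{2})/i);
  if (!match) throw new Error("no total found");
  return Number(match[1].replace(/,/g, ""));
}
```

The first version has a marginal cost forever; the second only costs compute you already pay for.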
I can understand skepticism to a degree, and even fundamentally believing that AI is bad for all sorts of reasons, but I am becoming more and more perplexed at the certainty behind statements like this one. How are you so certain that AI development is this doomed? It just hasn't matched my experience at all, and I wonder what experience has driven you to this level of certainty about the doom of AI coding?
Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things and feel confident that you have explored the space enough to come to such a strong conclusion?
I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments that have changed the way I do what I do numerous times. The more experience and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested with real-world production workloads.
You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.
I don't know any serious engineers that are doing real work with AI agents. I know some that are building features for web applications and just punching a clock, but I don't think that constitutes real work or provides much value to the world.
I like thinking, solving problems, and typing out code myself. I'm going to keep putting tons of care into my craft, and I promise I'll have more impact than the guy running 3 agents to build the 500th version of some web concept.
Rolex has a much bigger impact on the world than white label mass manufacturers in China.
It is real work; it's just that 90% of it is either net negative for society or provides neutral value. Most web applications that are piling on features now because they have agents are piling on features that we never needed in the first place, hence why they weren't prioritized previously. Junk junk junk.
What makes this better/different than spec-kit? It seems to have a very similar philosophy. I wonder if they could work together? Or would they just be duplicative?
Everything you say is all possible, and in theory I agree with you.
However, I have been using spec-kit (which is basically this style of AI usage) for the last few months and it has been AMAZING in practice. I am building really great things and have not run into any of the issues you are talking about as hypotheticals. Could they eventually happen? Sure, maybe. I am still cautious.
But at some point, once I have personally used it in practice for long enough, I can't just dismiss it as snake oil. I have been a computer programmer for over 30 years, and I feel like I have a good read on what works and what doesn't in practice.
We can build all the scaffolding we want around them, but I assure you the fundamental problem here is that LLMs aren't perfect rule-following machines, and that would remain.
Give it a few more months and I'm sure you'll see some of what I see if not all.
I'm saying all of the above having tried and tested all sorts of systems with AI; that experience is what leads me to say what I said.
I have been doing this for 6 months or so now, and I am not sure that having a lot more experience than me would make your assessment more accurate, since that just means you have more experience with prior generations of the models. What I have experienced is that the AI has been getting better and better, and is making fewer and fewer mistakes.
Now, part of that is my advancements as well, as I learn how to specify my instructions to the AI and how to see in advance where the AI might have issues, but the advancements are also happening in the models themselves. They are just getting better, and rapidly.
The combination of getting better at steering the AI along with the AI itself getting better is leading me to the opposite conclusion from yours. I have production systems that I wrote using spec-kit that have been running in production for months, and have been doing spectacularly. I have been able to consistently add the new features I need to, without losing any cohesion or adherence to the principles I have defined. Now, are there mistakes? Of course, but nothing that can't be caught and fixed, and not at a higher rate than traditional programming.
I think it depends on your goals, and also on your preferences and expectations, how your experience with LLMs goes. I don't mind if they hallucinate. Even if I have a mental model of the code, I won't write it perfectly myself either.
The only downside I see is getting out of practice, which is why I don't use it for my passion projects. Work is just work, and pressing 1 or 2 and having 'good enough' can be a fine way to get through the day. (Lucky me, I don't write production code ;D... goals...)
While I agree with the premise, I do wonder how you can write a law that would stop the behavior we want to stop without hurting beneficial features or allowing the law to be too easily bypassed.
How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?
> How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?
I don't know how you'd write it in a law either, but if you're in a meeting at your tech company, and the product owner or tech lead uses language like "We need to get users to do..." and "We need to incentivize..." and "It should be easy to do X and hard to do Y..." then do whatever is in your power to steer or stop it. You're not really building a product users want; you're pushing a behavior-modification scheme onto users.
> you're pushing a behavior-modification scheme onto users
In general I think that your comment is reasonable. I just would like to point out that such "behavior-modification" schemes are sometimes introduced for genuinely good and ethical reasons.
For instance, it is in my opinion desirable to make it more difficult for users to delete all their photos by e.g. having to confirm their decision in a dialog first. Because it prevents them from accidentally doing something they might not want to do and which is potentially impossible to revert.
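A minimal sketch of that kind of intentional friction, with hypothetical names (window.confirm standing in for a real dialog component):

```typescript
// Hypothetical destructive backend call.
async function deleteAllPhotos(): Promise<void> { /* ... */ }

// Deliberately make the destructive, irreversible path harder: the user
// has to confirm before anything happens. This is behavior modification,
// but in the user's favor.
async function onDeleteAllClicked(photoCount: number): Promise<void> {
  const ok = window.confirm(
    `Delete all ${photoCount} photos? This cannot be undone.`
  );
  if (!ok) return;
  await deleteAllPhotos();
}
```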
I feel like they will just frame it differently: “Users aren’t getting the full value from product x, so let’s change the workflow to help enable them to get more value with no additional effort” or “Users are losing out on a ton of value by cancelling their subscriptions without realizing what they are losing out on, so let’s implement feature x to make them less likely to mistakenly cancel”
One way is intent. If a company's internal communications show that they're intentionally making it addictive, or worse they know it causes harm, you have the smoking gun. This of course doesn't catch all the abuse, but at least it makes it much harder to do this down an entire reporting chain. They have to get really good at winking.
One famous case was Apple suing Samsung over patents. Hard to prove until internal comms surfaced showing intent to copy the iPhone.
Yeah I've done those trainings. That's expected. Even if people learn to say things without saying them, it's a lot harder to communicate across multiple people. And some people are still loudmouths, like at Samsung evidently.
Agree. My first thought is that most people in the early days didn't even want to start using PCs for work to begin with. Businesses generally had to mandate it. I imagine many people are facing this today with AI.
Very simple - force companies into data interoperability. That will allow users to move to a competitor without any data loss. E.g. nobody actually cares that GitHub is constantly down, because you can move your repos to a different git provider or to your own server.
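Git is a good model here precisely because the data format is the interface; moving a repo wholesale is two commands (hostnames illustrative):

```
git clone --mirror https://github.com/you/project.git
cd project.git
git push --mirror https://git.your-own-server.example/you/project.git
```

That covers the repository itself; issues and pull requests are exactly the kind of surrounding data an interoperability mandate would also have to cover.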
> How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?
For laws like this it always boils down to "I'll know it when I see it" which is such a shockingly poor way to write legislation that I'm flabbergasted it doesn't immediately fail any amount of rudimentary scrutiny. Not to mention the latitude it grants for selective enforcement. It's basically Washington asking (through the Economist) for a leash on platforms that host their critics that they can yank at any time the population gets too rowdy, with the convenient justification that the algorithm is too good and our attention spans are in danger or whatever.
Well, you could look to the gambling market for inspiration and let people voluntarily sign up for a blacklist on that feature.
That would be a lot of extra work for the platforms, but I think the results would be interesting. It amounts to legislating that certain features have to be optional and configurable.
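As a sketch of what "optional and configurable" could mean in code (the feature names and in-memory store are hypothetical; a real platform would persist this):

```typescript
// Gambling-style self-exclusion applied to product features: users can put
// a feature on their personal blacklist, and the platform must honor it.
type Feature = "infinite_scroll" | "autoplay" | "streaks";

const selfExclusions = new Map<string, Set<Feature>>(); // userId -> excluded

function optOut(userId: string, feature: Feature): void {
  const excluded = selfExclusions.get(userId) ?? new Set<Feature>();
  excluded.add(feature);
  selfExclusions.set(userId, excluded);
}

function isEnabled(userId: string, feature: Feature): boolean {
  return !(selfExclusions.get(userId)?.has(feature) ?? false);
}

// Usage: gate every "engagement" feature behind the check, e.g.
// if (isEnabled(user.id, "autoplay")) playNextVideo();
```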
I was managing k8s clusters for a CDN; we had small clusters in each of our ~120 datacenters around the world, plus a number of larger clusters in our redundant back offices.
We couldn't use managed clusters because these were running on our own hardware, and they needed to run on the same infrastructure as the CDN itself.
The point of the local clusters was for workloads that needed to run in each data center, and then multiple clusters in the back office for both compliance and operational reasons.
I would not have used a tool like this, though. We used Rancher to manage our clusters.
The question is whether we feel air travel is as essential to everyday life as busses and trains are.
In other words, do we need to make sure everyone can afford to take a flight somewhere?
Or is air travel a luxury that we can allow the market to set a price for?
Maybe flights are simply too cheap, and we should just allow airlines to fail, which will limit supply enough to bring ticket prices back up to a level that is sustainable for airlines as a business.
Of course, this means that a lot of people are going to be priced out of being able to fly places for non-essential reasons. Which, given the environmental impact, might not be a bad thing, although it will make life very different for most people.
Personally, I'm inclined to drive rather than fly if I can get there in 8-12 hours or less, even if that uses up close to a full day each way of a round trip. I absolutely abhor flying. I'm a bit tall and fat, and I've been stuck in the last row, inside seat, with less room in both dimensions, while the person in front of me leaned back and literally dislocated my knee. All because they oversell flights, and your seat selection at purchase time apparently counts for jack squat when you show up at the airport to check in. You really needed to "check in" online the day before, even if you were in meetings until 9pm with no time to step aside and do it on your phone, and that's a miserable experience in itself, because nobody does accessibility testing with phones at larger text/display sizes.
> The question is whether we feel air travel is as essential to everyday life as busses and trains are.
Anywhere I can get to by train in the USA I can go faster and cheaper by plane. By bus I can go "cheaper" if I ignore the value of my time and the people offering me meth at the bus-stop.
That idea around LLMs reducing the signaling value of certain types of work is very interesting, and I hadn’t really thought about it.
I think about this effect with targeted advertising a lot, ever since I read this article about why targeted advertising is so useless for both consumers and advertisers, even when on the surface it seems like it should be better for both: https://zgp.org/targeted-advertising-considered-harmful/
Once it becomes very cheap to do something, it becomes completely useless as a way to differentiate quality from crap.
It is super valuable to have your agent files in version control, both because it is useful to be able to revert to a previous state and have your AI know where you are, and because being able to freshly clone a repo and have your AI know everything is very helpful.
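For example, a layout like this (the file names follow common conventions such as AGENTS.md and CLAUDE.md; use whatever your tooling actually reads):

```
my-project/
├── AGENTS.md        # instructions a coding agent reads when it starts
├── CLAUDE.md        # Claude Code-specific guidance, if you use it
├── .cursor/rules/   # Cursor rule files, if you use it
└── src/
```

Anyone who clones the repo, human or agent, gets the same context for free.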