they would get bugs on every invocation of the software, not on a new version of the AI. It's equivalent to your compiler having a RAND function in it that chooses between a billion different options every time it compiles; it's absolutely not equivalent to a compiler having a bug.
weirdly made up scenario. I'm the person in the very first sentence. Tab-completing lines is still dog-shit. The majority of the time it has no clue what I'm going to write. Just because it can now write a lot more stuff doesn't mean it isn't still just as incorrect.
Also, you've set up a huge strawman here. Who are these people saying these things in this order and why is that the argument and not "You need to be reviewing every line of code that gets written and understand it."
> The other change is simpler: I'm doing the design work myself, by hand, before any code gets written. Not a vague doc. Concrete interfaces, message types, ownership rules.
That’s the hard part of coding. If you have an architecture then writing the code is dead simple. If you aren’t writing the code, you aren’t going to notice when you architected an API that allows nulls but your database doesn’t. Or that it does allow them, but some other small issue you never accounted for surfaces.
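The null mismatch described above can be made concrete with a small sketch (hypothetical names, assuming an interface that permits a missing value backed by a NOT NULL column):

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

# The interface as "architected": email may be absent.
@dataclass
class User:
    name: str
    email: Optional[str]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT NOT NULL, email TEXT NOT NULL)")

def save(user: User) -> bool:
    """Insert a user; return False if the schema rejects the row."""
    try:
        conn.execute("INSERT INTO users VALUES (?, ?)", (user.name, user.email))
        return True
    except sqlite3.IntegrityError:
        return False

print(save(User("ada", "ada@example.com")))  # True
print(save(User("bob", None)))               # False: the schema forbids
                                             # what the interface allows
```

The type checker is perfectly happy with `email=None`; the mismatch only shows up at runtime, which is exactly the kind of thing you miss if you never read or run the code yourself.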
I do not know how you can write this article and not realize the problem is the AI. Not that you let it architect, but that you weren’t paying attention to every single thing it does. It’s a glorified code generator. You need to be checking everything it does.
The hard part of software engineering was never writing code. Junior devs know how to write code. The hard part is everything else.
Yes, I think there's 2 kinds of developer. Those who think the code is the hard part, and those that don't.
The developers that think coding is hard are the ones that absolutely love AI coding. It's changed their world because things they used to find hard are now easy.
Those that think coding is easy don't have such an easy time because coding to them is all about the abstractions, the maintainability and extensibility. They want to lay sensible foundations to allow the software to scale. This is the hard part. When you discover the right abstractions everything becomes relatively easy. But getting there is the hard part. These people find AI coding a useful tool but not the crazy amazing magical tool the people who struggle with coding do.
The OP is definitely in the second camp since they could spot and realise the shortcomings of the AI. They spotted the problem, and that problem is that the AI can't do the hard bit.
I'd say there's another camp: the camp of people who know that code isn't the hard part, but that it's still time consuming to write code. AI coding is pretty useful for that, when you can nail the design but you just need a set of hands to implement it.
I'm classing that as the second camp. Because you don't find it hard to do, it's just time consuming. It means you still know what you're doing and you're just using AI as a tool to accelerate your delivery. That's the optimal way to use it, in my experience, if you want to actually deliver well-architected software.
I like coding, I just don't particularly enjoy figuring out the framework du jour. The task at hand is interesting, but the part where I need to figure out what the incantations are to have a Qt list with images in it is not. I need a working UI to get the thing done, but the framework stands in my way, requiring me to step away from my intended task and spend a few hours on understanding QTreeView.
That's where I really enjoy AI currently, because I can get the GUI stuff out of the way much faster and get back to the thing the GUI is for.
Now within the specific problem I'm trying to solve, sure, I enjoy thinking about the abstractions, maintainability and extensibility. That's the part that actually matters. But the Qt UI on top, that's just a visual layer with a structure that was already set in stone, there's no big decisions of interest to make there. Just to figure out how to make it do the thing.
But isn’t AI doing the same thing to project management as to coding?
PMs can now cross-reference and organize tickets with just a few keystrokes. Organisational knowledge, business knowledge, design systems and patterns, etc.: all of it is encoded in LLM-consumable artefacts. For PMs it is the same switch: instead of having to do it by hand, you direct lower-level employees to handle the details and inconsistencies and you just do vibe and vision.
When all of the pieces successfully connect and execute reliably, what is left for humans to do? Just direct and consume?
And AI companies with their huge swaths of data are soon gonna be in the situation of being able to do the directing themselves
Such a person is just pushing a giant pile of cleanup work onto their colleagues. Unless they actually checked, the "cross references" are probably wrong in places or just entirely made up. Lower level employees by definition don't have the experience to correct the more subtle inconsistencies, so you've basically just constructed a high pass filter that lets only the worst failures through. Moreover, you're absolutely guaranteed to lose the respect of those lower level employees--forcing someone else to clean up your sloppy work is just cruel, and people resent being treated cruelly.
this pretty much sums up what I feel about AI currently. It made my life significantly easier on most tasks I already breeze through, yet tasks I used to struggle with are still just as difficult.
What problems are they? I can't really think of any problems where writing the code was the hard part.
There's plenty of times where I don't know what code to write because I've never used a library before. But it's just a page of documentation away. It's not hard, it's just slow and tedious.
I agree with what you're saying, but I think we do have a problem right now with definitions where there's a lot of people basically getting supercharged tab completions or running a chatbot or two in a parallel pane, but still clearly reviewing everything; and on the other side of things is freaking Steve Yegge pitching a whole new editor that lets you orchestrate a dozen or more agents all vibing away on code you're apparently never going to read more than a line or two of: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas. The second group are not, and those are the ones that I find a bit more worrisome.
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas.
I can't speak for others, but I'd go further and say that LLMs allow me to go deeper on the design side. I can survey alternative data structures, brainstorm conversationally, play design golf, work out a consistent domain taxonomy and from there function, data structure and field names, draft and redraft code, and then rewrite or edit the code myself when the AI cost/benefit trade off breaks down.
Same as GP, and this is where I stand with it too. Additionally, the cost of "exploring" down a riskier design path and discovering the unknown unknowns is substantially reduced, which I think ultimately leads to better decision-making on the design side. It's less "let's just stick to the pattern/tools that we know for sure work because we've done it before" and more "here's a vibed up mockup of it working, we can all see how this actually works and the better pattern that it enables".
Obviously technical and design choices have risks beyond just initial implementation, and those have to be considered too (do we trust the dependency, will it still be there in a year, can we get fixes merged upstream), but I think there's significant value in driving down the cost of code sketches involving unfamiliar libraries and tools.
That’s a little bit of a No True Scotsman. Yes there are people who do not review anything; but even people who are reviewing every line from an LLM do not have the same understanding as someone who wrote it themselves.
I’m not making a judgement call about which is better, but it was widely accepted in tech before the advent of LLMs that you just fundamentally lack a sense of understanding as a reviewer vs an author. It was a meme that engineers would rather just rewrite a complicated feature than fix a bug, because understanding someone else’s code was too much effort.
That blog post is surreal. It's like cryptocurrencies and the whole web3 nonsense. Cryptocurrencies basically don't work, so there have been a hundred aimless attempts at fixing self inflicted problems caused by deficiencies of cryptocurrencies with no actual goal that has any impact on the real world.
It's the same thing here. AI has dropped the cost of software development, so developers are now fooling themselves into producing low or zero value software. Since the value of the software is zero or near zero, it doesn't really matter whether you get it right or not. This freedom from external constraints lets you crank up development velocity, which makes you feel super productive, while effectively accomplishing less than if you had to actually pay a meaningful cost to develop something.
Like, what is the purpose of Gas Town? It looks to me like the purpose of Gas Town is to build Gas Town.
> and on the other side of things is freaking Steve Yegge pitching a whole new editor that lets you orchestrate a dozen or more agents all vibing away on code you're apparently never going to read more than a line or two of
I find it useful to not listen to people who just talk.
> The first group are still thinking fairly deeply about design and interfaces and data structures, and are doing fairly heavy review in those areas
I worry about the first group too, because interfaces and data structures are the map, not the territory. When you create a glossary, it is to compose a message that transmits a specific idea. I invariably find that people who focus that much on the code often forget the main purpose of the program in favor of small features (the ticket). And that has accelerated with LLM tooling.
I believe most of us who are not so keen on AI tooling are always thinking about the program first, then the various parts, then the code. If you focus on a specific part, you make sure that you have well-defined contracts with the other parts that guarantee the correctness of the whole. If you need to change a contract, you change it with regard to the whole thing, not the specific part.
The issue with most LLM tools is that they’re linear. They can follow patterns well, and agents can have feedback loops that correct them. But contracts are multi-dimensional forces that shape a solution. That solution emerges more like a collapsing wave function than a linear prediction.
I’ve noticed that agents almost always fail at the planning vs. execution stage.
I follow the plan -> red/green/refactor approach and it is surprisingly good, and the plans it produces all look super well reasoned and grounded, because the agent will slurp all the docs and forums with discussions and the like.
Trouble is, once it starts working there will inevitably be a point where the docs and the implementation actually differ: either some combination of tools that has not been used in that way before, some outdated docs, or just plain old bugs.
But if the goals of the project/feature are stated clearly enough it is quite capable of iterating itself out of an architectural dead end, that is if it can run and test itself locally.
It goes as deep as inspecting the code of dependencies and libraries and suggesting upstream fixes etc. all things that I would personally do in a deep debugging session.
And I’m super happy with that approach, as I’m more directing and supervising rather than doing the drudgery of it.
Trouble is, a lot of my teammates _don't_ actually go this deep when addressing architectural problems; their usual modus operandi is “escalate to the architect”.
This will not end well for them in the long run, I feel, but I'm not sure what they can do about it themselves - the window of being able to run and understand everything seems to be rapidly closing.
Maybe that’s not super bad - I don’t know exactly what the compiler is doing to translate things to machine code, and I definitely don’t get how the assembly itself is executed to produce the results I want at scale - that is a level of magic and wizardry I can only admire (look-ahead branching strategies and caching on modern CPUs are super impressive - like, how is all of this even producing correct responses reliably at such a scale...)
Anyway - maybe all of this is ok - we will build new tools and frameworks to deal with all of this, human ingenuity and desire for improvement, measured in likes, references or money will still be there.
This is what seems to be lost on so many. As someone with relatively little code experience, I find myself learning more than ever by checking the results and what went right/wrong.
This is also why I don't see it getting better anytime soon. So many people ask me "how do you get your claude to have such good output?" and the answer is always "I paid attention and spotted problems and asked claude to fix them." And it's literally that simple but I can see their eyes already glazing over.
Just as google made finding information easier, it didn't fix the human element of deciphering quality information from poor information.
Reading code looking for errors is a hard thing to do well over a large amount of code. A better approach is to ensure tests cover all the important cases and many edge cases. Looking at the code may still be a good idea, but mostly to check the design. I think that once you get Claude to test the code it writes well, trying to find errors in the code by eye is a waste of time. I’ve made the mistake of thinking Claude was wrong many times despite the tests passing, just to be humbled by breaking the tests with my “improvements”!
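As a minimal illustration of pinning behaviour with tests rather than line-by-line review (with `clamp` as a hypothetical stand-in for whatever the agent produced):

```python
# Stand-in for agent-written code: the point is that its behaviour is
# pinned by tests over important and edge cases, not by reading it.
def clamp(value: float, lo: float, hi: float) -> float:
    """Restrict value to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

# Interior point, both out-of-range directions, and both boundaries:
assert clamp(5, 0, 10) == 5     # inside the range
assert clamp(-1, 0, 10) == 0    # below the lower bound
assert clamp(99, 0, 10) == 10   # above the upper bound
assert clamp(0, 0, 10) == 0     # exactly at lo
assert clamp(10, 0, 10) == 10   # exactly at hi
print("all cases pass")
```

If any of these fail after the agent "improves" the code, the regression is caught without anyone having to re-read the implementation.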
This is the only way for me to use Agents without completely hating and failing at it. Think about the problem, design structures and APIs and only then let AI implement it.
And once you get familiar with the other parts, you realize that writing code is the most enjoyable one. More often than not, you’re either balancing trade-offs or researching what factors you have missed with the previous balancing. When you get to writing code, it’s with a sigh of relief, as that means you understand the problem well enough to try a possible solution.
You can skip that and go directly to writing code. But that means you've replaced a few hours of planning with a few weeks of coding.
Very well said. When I'm working on a hard problem I'll often spend a few weeks sweating details like algorithms, API shapes, wire formats, database schemas, etc. These things are all really easy to change while they're just in a design document. Once you start implementing, big sweeping edits get a lot more difficult. So better to frontload as much of that as possible in the design phase. AI coding agents don't change this dynamic. However all that frontloaded work pays off big when it does come time to implement, because the search space has been narrowed considerably.
The Affinity suite was made free to use, with optional paid "AI" features behind a subscription. The betrayal was probably of the earlier promise of a perpetual license rather than a subscription.
Yeah this really breaks down when you put the logic up against ANY sort of compliance testing. Ok you don’t meet compliance, your agents have spent weeks on it and they’re just adding more bugs. Now what are you going to do? You have to go into the code yourself. Uh oh.
That control is only held by Amazon with their drm scheme. It’s not held by the publishers or authors. It does nothing to prevent piracy and only helps Amazon consolidate more control.
Don’t comment if you don’t want to actually contribute. How are people supposed to know these things before buying the equipment. What if they’re the only provider in their region? There’s a billion reasons why your comment doesn’t contribute.
"Don't buy their stuff" is exactly the right answer. You need to do your research before you buy big ticket items. It may not be true in every sector, but Deere has plenty of competition.
No, that's not the answer. It only applies to those people who have the time and energy to spare to do that research. I'm not talking just about farming equipment, but ordinary items such as a vacuum cleaner or printer.
If you're low income, work 2 jobs, are a single parent, get home at 23:00 broken tired, want a meal but your fridge just broke down and everything inside is spoiled, you don't spend 2 more hours doing research. You clean it, go to bed hungry, call repair in the morning (optional, if your hopes are high), and when they tell you it's not repairable, you get the first new fridge you can afford in a 10-minute online search while on the bus/train/tram, being late to work.
Self-repair is an average day on a farm.
A farmer who does not research equipment they are about to purchase, especially before spending a small fortune, is a fool.
That's like saying you don't bother learning what illnesses your animals or crops may contract, and how to prevent/cure them, because you're not wealthy.
Buying a book and reading it, to improve your abilities, is time well spent.
Most maintenance on a tractor is not major, and requires only basic skills and parts. It's the companies that don't want this; they want specialized technicians to come out to replace an oil filter.
I have a 30 year old vacuum cleaner, which I continue to maintain, which mostly amounts to stripping it once every 10 years and cleaning out all the filters that caked up with fine dust. Definitely cheaper to strip it myself one evening, than to pay someone to do it, or purchase a new one. It is like an hour of work for years of service.
It takes twenty minutes to figure out what other people are saying about a product. And even if you don't have time to do research, you develop a feeling for brands over time. I would never buy a lawn mower from Deere, not because this or that lawnmower is a known bad item, but because the company has had a bad reputation for decades.
A tractor can be almost a million dollar item now, and nobody spending a million dollars should be doing so without doing some research.
This 100%. I live rural and my water pump broke. No water means no showers, no dishwasher, no washing machine, and everyone in the family being uncomfortable. Realistically you get whatever the plumbing place has in stock and knows how to install - even if it's not technically the best one for the site.
Do you seriously expect other companies not following suit? People need lawnmowers, so this can quickly turn into the same situation we have with the inkjet printer market.
Nobody is saying you can't relate your experience with this equipment. What we're saying is consumer action is enough to solve this problem. It just takes some time.
There's a certain type of customer that wants the dealer to handle parts and repair. But those guys aren't the lawn mower segment.
I have not been to a grocery store that sells "Deere-brand Large Ag Equipment"[0] - aka $200k-1M John Deere tractors/harvesters/combines - that are the subject of the settlement. Have you?
You're posting in a thread discussing news of a legal outcome that showed that free market competition did not prevent anti-competitive practices and instead required legal/regulatory intervention to solve.
To say that these are "anti-competitive practices" is stretching the phrase beyond all meaning. If you don't like Deere's policies, you can always buy from Case IH or New Holland. There is plenty of competition in farm equipment.
Most can't "always" immediately replace an incredibly expensive business asset that is only retroactively discovered to have been sold under deceptive terms. The free market works well in many instances, but it needs checks to ensure that it remains truly free and not captured by fraudulent actors that harm consumers and society at large.
Well... farm equipment is high-six- to seven-figure business equipment (the lifetime operating costs are definitely in the 7 figures). It's owned and operated by people whom I would expect to do this type of research and critical thinking. These aren't normie consumers buying everyday appliances or electronics.
However.. farmers are a weird bunch and they are blinded by brand loyalty or will only buy from an "American" company which ironically allowed JD to stomp all over them because of their dominant market position.
The attachment mechanism is usually standardized, so you can just switch between brands.
Nowadays a larger factor might be how close the next dealership/repair shop is. Some things are time critical, and when it breaks in this time, then you don't want to drive hours to have it fixed/get a part/ have a mechanic available.
There are some differences between the brands... And you can always be Clarkson and get a Lamborghini, even when it makes no sense ;-)
As an outsider, that’s literally what I’m doing: paying attention to the reviews. And some people are telling the reviewer to shut up and quit whining, thus encouraging them not to leave the review that I want to be reading.
Make up your mind. Do you want people to read and write reviews, or don’t you?
All great in theory, but when importing farm machinery you need to take into account servicing options and warranty claims. It would be painful if you needed to truck a harvester, or even a mower, interstate for a warranty claim.
And it's not like these things are always available from a source with reviews. Reviews for new models are less likely to cover repair-access issues that will arise in a few years' time.