It's coding at 10-20x speed, but tangibly this comes out to 1.5-2x overall productivity. The coding speed-up doesn't translate completely to overall velocity yet.
I am beginning to build a high degree of trust in the code Claude emits. I'm having to step in with corrections less and less, and it's single-shotting entire modules (500-1k LOC, multiple files touched) without any trouble.
It can understand how a frontend API call translates to middleware, internal API service calls, and database queries (with a high degree of schema understanding, including joins).
(This is in a Rust/Actix/Sqlx/Typescript/nx monorepo, fwiw.)
I don't think that's the real dichotomy here. You can either produce good, maintainable code 2-5x faster, or produce 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.
The management has decided that the latter is preferable for short term gains.
> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
Look at the pretty pictures AI generates. That's where we are with code now. Except with code you have ComfyUI instead of ChatGPT: you can work with precision.
I'm a 500k-TC senior SWE. I write six-nines, active-active, billion-dollar-a-day systems. I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and have no value. Look no further than Ubisoft and their Anno 117 game for proof.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
The problem is not that it can’t produce good code if you’re steering. The problem is that:
There are multiple people on each team, and you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outperform you in output, by orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.
This produces an incentive to not actually care about the quality. Which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do total blind vibe coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI-coding and not Vibe-coding)
- You give all the specs, and there is "some" chance that the LLM will generate exactly the code required.
- In most cases, you will need >2 design iterations and many small iterations, like instructing the LLM to handle errors properly and recover gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed. I don't know about big tech, but this is what I have to do to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find out edge cases and failure modes missed during design, then repeat from step 3 to tweak the design, or from step 4 to fix bugs.
Whenever I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages evaporate.
But who's going to be buying any of these products if everyone is out of a job? Other rich people?
Sure I assume there's a good market there, luxury yachts exist after all, but what is a company like Netflix going to do when people are too poor to even afford the streaming services that cost 10 bucks a month?
Not to get conspiratorial, but the only logical thing for me here is that They want as many of the plebs dead as possible so that the remainder of us are beholden to them and their money, once they own all the AI factories.
I can write a spec for an entirely new endpoint, and Claude figures out all of the middleware plumbing and the database queries. (The kicker: this is in Rust and the SQL is raw, without an ORM. It just gets it. I'm reviewing the code, too, and it's mostly excellent.)
I can ask Claude to add new data to the return payloads - it does it, and it can figure out the cache invalidation.
These models are blowing my mind. It's like I have an army of juniors I can actually trust.
I'm not sure I'd call agents an "army of juniors." More like a high school summer intern who has infinite time to do deep dives into Stack Overflow but doesn't have nearly enough programming experience yet to have developed a "taste" for good code.
In my experience, agentic LLMs tend to write code that is very branchy, with high cyclomatic complexity. They don't follow DRY principles unless you push them very hard in that direction (and even then not always), and sometimes they do things that just fly in the face of common sense. Example of that last part: I was writing some Ruby tests with Opus 4.6 yesterday, and I got dozens of tests that amounted to this:
    x = X.new
    assert x.kind_of?(X)
This is of course an entirely meaningless check. But if you aren't reading the tests and you just run the test job and see hundreds of green check marks and dozens of classes covered, it could give you a false sense of security.
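For contrast, here is a minimal sketch of the difference, using a hypothetical `Stack` class under test (the class name and methods are invented for illustration). The first test passes for any class whose constructor doesn't raise; the second actually exercises behavior:

```ruby
require "minitest/autorun"

# Hypothetical class under test.
class Stack
  def initialize
    @items = []
  end

  def push(x)
    @items.push(x)
    self
  end

  def pop
    @items.pop
  end

  def size
    @items.size
  end
end

class StackTest < Minitest::Test
  # Vacuous: passes for any object that constructs without raising.
  def test_is_a_stack
    assert Stack.new.kind_of?(Stack)
  end

  # Meaningful: asserts actual behavior (LIFO order, size bookkeeping).
  def test_pop_returns_most_recently_pushed_item
    s = Stack.new
    s.push(1).push(2)
    assert_equal 2, s.pop
    assert_equal 1, s.size
  end
end
```

The second test would catch a `pop` that returned the wrong element; the first never can.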
> In my experience, agentic LLMs tend to write code that is very branchy with cyclomatic complexity
You are missing the forest for the trees. Sure, we can find flaws in the current generation of LLMs. But they'll be fixed. We have a tool that can learn to do anything as well as a human, given sufficient input.
LLMs have been a thing for about three years now, so you can't have been hearing this for very long. In those three years, the rate of progress has been astounding and there is no sign of slowing down.
> You do you, but increasing taxes to build products to replace products built by private enterprise sounds like a 180 degree opposite of what Europe needs to prosper.
Shhh, don't tell them.
(Kidding, of course.)
The best solution is skin-in-the-game, for-profit enterprise coupled with rigorous antitrust enforcement.
Companies will go a million times faster than open source. They're greedy and will tear the skin off of inefficiencies and eat them for lunch. That's what they do. Let the system of capitalism work for you. It's an optimization algorithm. One of the very best.
But when companies get too big and start starving out competition, that's when you need to declaw them and restore evolutionary pressure. Even lions should have to work hard to hunt, and they should starve and die of old age to keep the ecosystem thriving.
> The best solution is skin-in-the-game, for-profit enterprise coupled with rigorous antitrust enforcement.
Don't we have enough examples showing that this simply cannot work long-term, because the for-profit enterprises will _inevitably_ grow larger than the government can handle through antitrust? And once they reach that size, they become impossible to rein in. Just look at all the stupidly large American corporations that can't be broken up anymore, because the corporation has the lobbying power and media budget to make any attempt to enforce antitrust a career killer for a politician.
I think it's very myopic to say that corporate structure is the "best solution".
It seems like you have an unfalsifiable belief. If one side raises more money and wins, it is because of the money. If one side raises more money and loses, it is still the money, because the other side spent it more effectively.
And the fact that a 3rd party supports an opponent does not kill any politician's career. Biden retired by himself, following his own party's pressure. And Harris is still around, I believe.
Please forgive me for being blunt, I want to emphasize how much this strikes me.
Your post feels like the last generation lamenting the new generation. Why can't we just use radios and slide rules?
If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
There's so much plumbing and refactoring bullshit in writing code. I've written years of five nines high SLA code that moves billions of dollars daily. I've had my excitement setting up dev tools and configuring vim a million ways. I want starships now.
I want to see the future unfold during my career, not just have it be incrementalism until I retire.
I want robots walking around in my house, doing my chores. I want a holodeck. I want to be able to make art and music and movies and games. I will not be content with twenty more years of cellphone upgrades.
God, just the thought of another ten years of the same is killing me. It's so fucking mundane.
I think my take on the matter comes from being a games developer. I work on a lot of code for which agentic programming is less than ideal - code which solves novel problems and sometimes requires a lot of precise performance tuning, and/or often has other architectural constraints.
I don't see agentic programming coming to take my lunch any time soon.
What I do see it threatening is repetitive, quasi-carbon-copy development work of the kind you've mentioned, like building web applications.
Nothing wrong with using these tools to deal with that, but I do think that a lot of the folks from those domains lack experience with heavier work, and falsely extrapolate the impact it's having within their domain to be applicable across the board.
I knew nothing about game development a few months ago. Now I've built a simple Godot game. I'm sure the game is pretty common stuff (a simple 2D naval combat game), but it's still impressive that a couple of Claude/Gemini/Codex CLI sessions spit out a working game. (Admittedly, I'm not a professional artist, so THAT part has been painful, since I can't rely on generative AI to do it; I have to do it myself in Aseprite. But maybe a professional artist would know HOW to prompt for the artwork.)
Agentic programming still needs devs/engineers. It's only going to take your lunch if you let it. And by that, I mean the FUD and complete refusal to make good-faith attempts to use the AI/LLM tools.
> Your post feels like the last generation lamenting the new generation.
> The future is exciting.
Not the GP, but I honestly wanted to be excited about LLMs. And they do have good uses. But you quickly start to see the cracks in them, and they just aren't nearly as exciting as I thought they'd be. And a lot of the coding workflows people are using just don't seem that productive or valuable to me. AI just isn't solving the hard problems in software development. Maybe it will some day.
> If you've ever enjoyed the sci-fi genre, do you think the people in those stories are writing C and JavaScript?
To go off the deep end… I actually think this LLM assistant stuff is a precondition to space exploration. I can see the need for an offline compressed corpus of all human knowledge that can do tasks and augment the humans aboard the ship. You'll need it because the latency back to Earth is a killer even for a "simple" interplanetary trip to Mars: that's roughly 6 to 44 minutes round trip! Hell, even the Moon has enough latency to be annoying.
Granted, right now the hardware requirements and rapid evolution make it infeasible to really "install it" on some beefcake system, but I'm almost positive the general form of Moore's law will kick in and we'll have SOTA models on our phones in no time. These things will be pervasive, and we will rely on them heavily while out in space and on other planets for every conceivable random task.
They'll have to function reliably offline (no web search), which means they probably need to be absolutely massive models. We'll have to find ways to selectively compress knowledge. For example, we might allocate more of the model weights to STEM topics and perhaps less to, I dunno, the fall of the Roman Empire, Greek gods, or the career trajectory of Pauly Shore. But perhaps not, because who knows: maybe a deep familiarity with Bio-Dome is what saves the colony on Kepler-452b.
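The Mars light-delay figure is easy to sanity-check. A quick calculation using the commonly cited Earth-Mars distance range (about 54.6 million km at closest approach, about 401 million km at maximum separation):

```ruby
C_KM_PER_S = 299_792.458 # speed of light, km/s

# Earth-Mars distance range in km: closest approach and maximum separation.
{ "closest" => 54.6e6, "farthest" => 401.0e6 }.each do |label, km|
  one_way_min = km / C_KM_PER_S / 60.0
  puts format("%s: one-way %.1f min, round trip %.1f min",
              label, one_way_min, 2 * one_way_min)
end
```

That works out to roughly 3-22 minutes one way, so 6-44 minutes for a round trip, depending on where the two planets are in their orbits.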
> Your post feels like the last generation lamenting the new generation [...] There's so much plumbing and refactoring bullshit in writing code [...] I've had my excitement
I don't read the OP as saying that: to me they're saying you're still going to have plumbing and bullshit, it's just your plumbing and bullshit is now going to be in prompt engineering and/or specifications, rather than the code itself.
I want to live forever and set foot on distant planets in other galaxies.
Got a prescription for that too?
I've made films for fifteen years. I hate the process.
Every one of my friends and colleagues that went to film school found out quickly that their dreams would wither and die on the vine due to the pyramid nature of studio capital allocation and expenditure. Not a lot of high autonomy in that world. Much of it comes with nepotism.
There are so many things I wish to do with technology that I can't because of how much time and effort and energy and money are required.
I wish I could magic together a P2P protocol that replaced centralized social media. I wish I could build a completely open source GPU driver stack. I wish I could make Rust compile faster or create an open alternative to AWS or GCP. I wish for so many things, but I'm not Fabrice Bellard.
I don't want to constrain people to the shitty status quo. Because the status quo is shitty. I want the next generation to have better than the bullshit we put up with. If they have to suffer like we suffered, we failed.
I want the future to climb out of the pit we're in and touch the stars.
Computing technology always becomes cheaper and more powerful over time. But it's a slow process. The rate of improvement for LLMs is already decreasing. You will die of old age before the technology that you seem to be looking for arrives.
Our astrophysicists don't even know why the universe is expanding, don't know that Lambda CDM is correct, don't know if things are universally consistent, yet we're so damned sure this is it.
We don't even know that this isn't a simulation. Non-falsifiable, sure. But we're convinced we're bound to this solar system with our crude tools and limits of detection.
One new instrument could upset our grand understanding and models. Maybe we should wait until they get better hardware to marry ourselves to their prognostications of the end of time.
During the postwar years of plenty, people stopped dreaming. We had bold dreams before WWII, but people stopped looking at how far we'd come and started comparing themselves to everyone else. We had no mortal enemy, tremendous wealth, and "keeping up with the Joneses" became the new operating protocol.
We have more than we did in the past. The manufacturing wealth of 1940-1970 was a fluke. The trade wealth of 1980-2020 was a fluke. We were upset over an unfair advantage that won't last forever. Even today we're still better off than a hundred years ago, yet everyone focuses on how bad things are.
Maybe a return to hardship will make us dream again.
It’s well understood that the expansion of the universe is not “due to general relativity”. General relativity does explain some details of that expansion.
The expansion of the universe is largely due to the impulse provided by the Big Bang. General Relativity does attempt to explain some details regarding what’s happened after the Big Bang, but provides no insight into the Big Bang itself. Nor does it offer insight into Dark Energy, a concept that Einstein opposed.
The universe is expanding due to general relativity... in the sense that Einstein literally wrote "... plus the expansion of the universe" into his equations.
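For reference, the term being alluded to is the cosmological constant $\Lambda$ that Einstein added to his field equations (he originally introduced it to permit a static universe; it was later repurposed to model accelerating expansion):

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}
```

Here $G_{\mu\nu}$ is the Einstein tensor, $g_{\mu\nu}$ the metric, and $T_{\mu\nu}$ the stress-energy tensor.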
https://github.com/storytold/artcraft
It's not like ComfyUI - it focuses on frontier models like Higgsfield or OpenArt do, and it is structurally oriented rather than node graph based.
Here's what that looks like (skip to halfway down the article):
https://getartcraft.com/news/world-models-for-film