Hacker News | stabbles's comments

You're glossing over the fact that mathematics uses only one token per variable `x = ...`, whereas software engineering best practices demand an excessive number of tokens per variable for clarity.

It's also a pretty silly thing to say difficulty = tokens. We all know line counts don't tell you much, and it shows in their own example.

Even if you did have math-like tokenisation, refactoring a thousand lines of "X=..." to "Y=..." isn't a difficult problem, even though it would be at least a thousand tokens. And even if you could come up with E=mc^2 in a thousand tokens, that doesn't make the two tasks remotely comparable in difficulty.


The other day someone commented on this site that in the age of agentic coding "maintaining a fork is really not that serious of an endeavor anymore," and that's probably the case. I'm sure continuously rebasing "revert birthday field" can be fully automated.

Then the only thing remaining is convincing a critical mass that development now happens over at `Jeffrey-Sardina/systemd` on GitHub.


IMO, the benefits aren't from getting mass adoption of this fork, but actually the opposite, at least ostensibly, because if it were to become "the" systemd, it would then face scrutiny and potential legal threat. This way, the maintainers can be in compliance, the legislators (if any are paying attention) can be superficially satisfied, while people can still avoid the antipattern. It's the "brown paper bag" speech from The Wire, basically.

At some point people will realize that not having an optional data field might not be worth the effort of indefinitely rebasing a revert and recompiling, since they could just not set the field for their user account by doing nothing.

That's overstating things. The biggest piece of infra is PyPI, to which uv is only an interface. They do distribute Python binaries, but that's not very impressive.

So when Charlie Marsh goes on a podcast saying that the majority of the complications they face in their work are in DevOps, he's also overstating things?

But you know best it seems!


Overstating complexity justifies funding, and attracts attention.


You would think that Finland's unemployment rate (10%+) would influence its ranking, but that's not the case at all.

As it's self-reporting, and it's more about expectations than actual happiness, a Finnish dude only needs to think that life is just incredible compared to what he sees on the other side of the border to self-report a 10 in happiness.

Could also explain Israel

Nordic countries have better safety nets.

I haven't travelled there but I grew up in Poland and still visit. US feels very capitalistic to me. I feel the pace is slower in Poland. In US I feel the need to produce. Might be just me.


This is how I feel as a Canadian. It's just a border between us, we've got issues of our own but on one side life seems much more transactional and individualistic in a somewhat repulsive way. I'm sure it's not unique to them, and I'm sure it's not uniformly pervasive. I rarely feel like a true foreigner while I'm in the country, but there's just this unsettling feeling of distrust coupled with a drive to consume that I don't feel when I'm north of the border.

Well, that's just inherent in the question which asks someone to imagine the best possible life vs. the worst possible life. In a society with lots of room to grow you aren't at the higher rungs. In a society with no progress possible you're at the top easily.

You really need dedicated types for `int64` and something like `final`. Consider:

    class Foo:
      __slots__ = ("a", "b")
      a: int
      b: float
There are multiple issues with Python that prevent optimizations:

* a user can define a subtype `class my_int(int)`, so you cannot optimize the layout of `class Foo`

* the builtin `int` and `float` are big-int like numbers, so operations on them are branchy and allocating.

And the fact that `Foo` is mutable, and that `id(foo.a)` has to produce a valid object identity, complicates things further.
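The subtype issue can be seen in a minimal sketch (plain CPython, nothing else assumed): a `my_int` instance is a perfectly valid `int`, so the runtime has to keep `Foo.a` as a pointer to a full object rather than an unboxed machine word.

```python
class Foo:
    __slots__ = ("a", "b")
    a: int
    b: float

class my_int(int):
    """A valid subtype of int that adds behavior."""
    def __repr__(self):
        return f"my_int({int(self)})"

foo = Foo()
foo.a = my_int(42)  # type-checks: my_int is-an int
foo.b = 2.5

# The slot must hold a full object pointer so that subclass behavior
# and object identity (id(foo.a)) remain observable.
assert isinstance(foo.a, int)
assert repr(foo.a) == "my_int(42)"
```

A closed-world `int64` type plus something like `final` would rule this out and let the slot become a raw 64-bit field.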


Maybe, but I quoted the specific part I was replying to. TS has no impact on runtime performance of JS. Type hints in Python have no impact on runtime performance of Python (unless you try things like mypyc etc; in fact, mypy provides `from mypy_extensions import i64`).

Therefore Python has no use for TS-like superset, because it already has facilities for static analysis with no bearing on runtime, which is what TS provides.
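That hints have no bearing on runtime is easy to demonstrate with a small sketch (the `add` function here is just an illustration):

```python
def add(x: int, y: int) -> int:
    return x + y

# Annotations are stored as metadata; nothing enforces them at runtime.
assert add.__annotations__ == {"x": int, "y": int, "return": int}
assert add("a", "b") == "ab"  # strings pass straight through, no error
```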


Because the python devs weren't allowed to optimize on types. They are only hints, not contracts. If they become contracts, it will get 5-10x faster. But `const` would be more important than core types.

What OP means is that they need to:

1) Add TS like language on top of Python in backwards compatible way

2) Introduce frozen/final runtime types

3) Use 1 and 2 to drive runtime optimizations


Still makes no sense. OP demands introduction of different runtime semantics, but this doesn't require adding more language constructs (TS-like superset). Current type hints provide all necessary info on the language level, and it is a matter of implementation to use them or not.

From all posts it looks like what OP wants is a different language that looks somewhat like Python syntax-wise, so calling for "backwards-compatible" superset is pointless, because stuff that is being demanded would break compatibility by necessity.


That was how the Mojo language started. And then soon after the hype they said that being a superset of Python was no longer the goal. Probably because being a superset of Python is not a guarantee for performance either.

Being a superset would mean all valid Python 3 is valid Python 4. A valuable property for sure, but not what OP suggested. In fact, it is the exact opposite.

The TL;DR: code should be easy to audit, not easy to write for humans.

The rest is AI-fluff:

> This isn't about optimizing for humans. It's about infrastructure

> But the bottleneck was never creation. It was always verification.

> For software, the load-bearing interface isn't actually code. Code is implementation.

> It's not just the Elixir language design that's remarkable, it's the entire ecosystem.

> The 'hard' languages were never hard. They were just waiting for a mind that didn't need movies.


To put it another way: this article isn’t about the AI fluff, it’s about the two sentences at the top the author wrote themselves. ;)

Perhaps we need an AI to human transformer to remove the AI fluff?

It really is AI fluff.

Are people starting to write and talk in this manner? I see so many YouTube videos where you can see a person reading an AI-written text. It's one thing if the AI wrote it, but another if the human wrote it in the style of an AI.

As someone pointed out to me, the way an AI writes text can be changed so it is less obvious; it's just that people don't tend to realise that.


Someone had one of those AI videos on in the background and, I can’t explain it, the ordering of the words is like nails on a chalkboard to me. I’m starting to have a visceral physiological response to AI prose that makes it actually painful to listen to.

The video was a biography about some Olympian, and I could tell the prompt included some facts about her wanting to be a tap dancer as a kid, because the video kept going back to that fact constantly. Every few sentences it would reference “that kid who wanted to be a tap dancer”. By the 6th time it brought up she wanted to be a tap dancer I was ready to scream.


Whenever I see a sentence of the form:

"X isn't A, it's (something opposite A)" I twitch involuntarily.


It's even infecting the highest levels of government:

https://www.pimlicojournal.co.uk/p/mps-are-almost-certainly-...


Man, you are bad at TL;DR-ing. You completely left out the main point the article makes: comparing stateful/mutating object-oriented programming, which humans like, with pure functional programming, which, presumably according to the author, LLMs thrive in.

I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe of the oracle, the latter is people tired of having to look something up for others.

I would say LMAAFY is like LMGTFY, whereas the sloppypasta is more like pasting a search results list without vetting it. That is, there are two phases to this phenomenon: query and results.

For well-intended open source contributions using GenAI, my current rules of thumb are:

* Prefer an issue over a PR (after iterating on the issue, either you or the maintainer can use it as a prompt)

* Only open a PR if the review effort is less than the implementation effort.

Whether the latter is feasible depends on the project, but in one of the projects I'm involved in it's fairly obvious: it's a package manager where the work is typically verifying dependencies and constraints; links to upstream commits etc are a great shortcut for reviewers.


Unfortunately, LLMs generate useless word salad and nonsense even when working on issue text; you absolutely have to reword the writing from scratch, otherwise it's just an annoyance and a complete waste of time. Even a good prompt doesn't help all that much, since it's just how the tool works under the hood: it doesn't have a goal of saying anything specific in the clearest possible way and inwardly rewording until it does; it just writes stuff out that will hopefully end up seeming at least half-coherent. And their code is orders of magnitude worse than even their terrible English prose.


I don’t think you’re being serious. Claude and GPT regularly write programs that are way better than what I would’ve written. Maybe you haven’t used a decent harness or a model released in the last year? It’s usually verbose, whereas I would try the simplest thing that could possibly work. However, it can knock out in a few minutes what would have taken me multiple weekends. The value proposition isn’t even close.

It’s fine to write things by hand, in the same way that there’s nothing wrong with making your own clothing with a sewing machine when you could have bought the same thing for a small fraction of the value of your time. Or, in the same fashion, spending a whole weekend modeling and printing a part you could’ve bought for a few dollars. I think we need to be honest about differentiating between the hobby value of writing programs versus the utility value of programs. Redox is a hobby project, and, while it’s very cool, I’m not sure it has a strong utility proposition. Demanding that code be handwritten makes sense to me for the maintainer, because the whole thing is just for fun anyway. There isn’t an urgent need to RIIR Linux. I would not apply this approach to projects where solving the problem is more important than the joy of writing the solution.


> Claude and GPT regularly write programs that are way better than what I would’ve written

Is that really true? Like, if you took the time to plan it carefully, dot every i, cross every t?

The way I think of LLMs is as "median targeters" -- they reliably produce output at the centre of the bell curve of their training set. So if you're working in a language that you're unfamiliar with -- let's say I wanted to make a todo list in COBOL -- then LLMs can be a great help, because the median COBOL developer is better than I am. But for languages I'm actually versed in, the median is significantly worse than what I could produce.

So when I hear people say things like "the clanker produces better programs than me", what I hear is that you're worse than the median developer at producing programs by hand.


A lot of computer users are domain experts in something like chemistry or physics or material science. Computing to them is just a tool in their field, e.g. simulating molecular dynamics, or radiation transfer. They dot every i and cross every t _in_their_competency_domain_, but the underlying code may be a horrible FORTRAN mess. LLMs potentially can help them write modern code using modern libraries and tooling.

My go-to analogy is assembly language programming: it used to be an essential skill, but now is essentially delegated to compilers outside of some limited specialized cases. I think LLMs will be seen as the compiler technology of the next wave of computing.


The difference is that compilers involve rules we can enumerate, adjust, etc.

Consider calculators: their consistency and adherence to requirements were necessary for adoption. Nobody would be using them if they gave unpredictable wrong answers, or if calculations involving 420 and 69 somehow kept yielding 5318008. (To be read upside-down, of course.)


But that's the point: an LLM is a vastly different object to a calculator. It's a new type of tool, for better or worse, based on probabilities and distributions.

If you can internalise that fact and look at it as giving a probable answer rather than an exact answer, it makes sense.

Calculators can't have a stab at writing an entire C compiler. A lot of people can't either, or it takes a lot of iteration anyway; no one one-shotted complicated code before LLMs either.

I feel the discussion shouldn't treat how they work as the fundamental objection, but rather the costs and impacts they have.


Compilers used to be unreliable too, e.g. at higher optimization levels and such. People worked on them and they got better.

I think LLMs will get better, as well.


nice. 3x.


It can certainly be true for several reasons. Even in domains I'm familiar with, often making a change is costly in terms of coding effort.

For example, just recently I updated a component in one of our modules. The work was fairly rote (in this project we are not allowed to use LLMs). While it was absolutely necessary to do the update here, it would have been beneficial to do it everywhere else. I didn't do it in other places because I couldn't justify spending the effort.

There are two sides to this - with LLMs, housekeeping becomes easy and effortless, but you often err on the side of verbosity because it costs nothing to write.

But much less thought goes into every line of code, and I am often kinda amazed at how compact and rudimentary the (hand-written) logic is behind some of our stuff that I thought would be some sort of magnum opus.

When in fact the opposite should be the case - every piece of functionality you don't need right now, will be trivial to generate in the future, so the principle of YAGNI applies even more.


I can agree with that. So essentially: "Claude and GPT regularly write programs that are way better than what I would’ve written given the amount of time I was willing to spend."


How much time and effort are you willing to spend on maintaining that code though? The AI can't do it on its own, and the code quality is terrible enough.


Have you tried the latest models at best settings?

I've been writing software for 20 years, Rust for 10 of them. I don't consider myself to be a median coder, but quite above average.

For the last 2 years or so, I've been trying out changes with AI models every couple of months, and they have been consistently disappointing. Sure, upon edits and many prompts I could get something useful out of it, but often I would spend the same amount of time or more than manual coding would have taken.

So yes, while I love technology, I'd been an LLM skeptic for a long time, and for good reason: the models just hadn't been good. While many of my colleagues used AI, I didn't see the appeal of it. It would take more time and I would still have to think just as much, while it made so many mistakes everywhere and I had to constantly ask it to correct things.

Now 5 months or so ago, this changed as the models actually figured it out. The February releases of the models sealed things for me.

The models are still making mistakes, but their number and severity is lower, and the output would fit the specific coding patterns in that file or area. It wouldn't import a random library but use the one that was already imported. If I asked it to not do something, it would follow (earlier iterations just ignored me, it was frustrating).

At least for the software development areas I'm touching (writing databases in Rust), LLMs turned into a genuinely useful tool where I now am able to use the fundamental advantages that the technology offers, i.e. write 500 lines of code in 10 minutes, reducing something that would have taken me two to three days before to half a day (as of course I still need to review it and fix mistakes/wrong choices the tool made).

Of course this doesn't mean that I am now 6x faster at all coding tasks, because sometimes I need to figure out the best design or such.

I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings, and not about the tab auto completion or the quick edit features of the IDEs, but the agentic feature where the IDE can actually spend some effort into thinking what I, the user, meant with my less specific prompt.


> I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings

So you have to burn tokens at the highest available settings to even have a chance of ending up with code that's not completely terrible (and then only in very specific domains), but of course you then have to review it all and fix all the mistakes it made. So where's the gain exactly? The proper goal is for those 500 lines to be almost always truly comparable to what a human would've written, and not turn into an unmaintainable mess. And AI's aren't there yet.


You really do need to try the latest ones. You can’t extrapolate from your previous experiences.


I do not think they are impartial - all I can see is lots of angst.


I feel like we're talking about different things. You seem to be describing a mode of working that produces output that's good enough to warrant the token cost. That's fine, and I have use cases where I do the same. My gripe was with the parent poster's quote:

> Claude and GPT regularly write programs that are way better than what I would’ve written

What you're describing doesn't sound "way better" than what you would have written by hand, except possibly in terms of the speed that it was written.


Yeah, it writing stuff that's way better than mine is not the case for me, at least for areas I'm familiar with. In areas I'm not familiar with, it's way better than what I could have produced.

No. I'm a pretty skilled programmer, and I definitely have to intervene and fix an architectural problem here and there, or gently chastise the LLM for doing something dumb. But there are also many cases where the LLM has seen something that I completely missed, or just hammered away at a problem enough to get a solution that is correct that I would have just given up on earlier.

The clanker can produce better programs than me because it will just try shit that I would never have tried, and it can fail more times than I can in a given period of time. It has specific advantages over me.


The verboseness is the key issue as to why LLMed PRs are bad.


> The value proposition isn’t even close.

That's correct, because most of the cost of code is not the development but rather the subsequent maintenance, where AI can't help. Verbose, unchecked AI slop becomes a huge liability over time, you're vastly better off spending those few weekends rewriting it from scratch.


let me translate this for the GP: "you're doing it wrong".


Having reviewed a lot of Ai-written python code, I think it's absolute nonsense.

It never picks a style, it'll alternate between exceptions and then return codes.

It'll massively overcomplicate things. It'll reference things that straight up don't exist.

But boy is it brilliant at a fuzzy find and replace.


if it wasn't so maddening it would be funny when you literally have to tell it to slow down, focus and think. My tinfoil hat suggests this is intentional to make me treat it like a real, live junior dev!


"you literally have to tell it to slow down, focus and think" - This soo much! When I get an unexpected result from claude, I ask it why - what caused it to do such-and-such. After one back and forth session like this putting up tons of guardrails on a prompt, claude literally said "you shouldn't have to teach me to think every session" !!


> When I get an unexpected result from claude, I ask it why - what caused it to do such-and-such.

No LLM can answer this question for you, it has no insight into how or why it outputted what it outputted. The reasons it gives might sound plausible, but they aren't real.


> Claude and GPT regularly write programs that are way better than what I would’ve written.

I’m sorry but this says more about you than about the models. It is certainly not the case for me!


I feel like every person stating things of this nature is literally not able to communicate effectively (though this is not a barrier anymore; you can get a dog to vibe-code games with the right workflow, which to me seems like quite an intellectual thing to be able to do).

Despite that, you will make this argument when trying to use copilot to do something, the worst model in the entire industry.

If an AI can replace you at your job, you are not a very good programmer.


Copilot isn't a model. Currently it's giving me a choice of 15 different models. By all evidence, AI is nowhere close to replacing me, but to hear other people tell it, it is weeks or maybe months away.

I'll just wait and see.


Remember when Copilot was released? It was running some OpenAI thing at the time. Now you can choose from many models, sure, but if you want a BMW, buy a BMW; don't buy a Nissan with badly strapped-on BMW decals.


No, I don't remember when it was released.

I don't want a Nissan or a BMW. This was provided by my employer, and I've been asked to use it. To be honest, I don't even understand how your car analogy applies to any of this.


It does generate word salad (and usefulness depends on the person reading it). If both the writer and the reader share a common context, there's a lot that can be left out (the extreme version is military signal). An SOS over the radio says the same thing as "I'm in a dangerous situation, please help me if you can" but the former is way more efficient. LLMs tend to prefer the latter.


> If an AI can replace you at your job, you are not a very good programmer.

Me and millions of other local yokel programmers who work in regional cities at small shops, in house at businesses, etc are absolutely COOKED. No I cant leet code, no I didnt go to MIT, no I dont know how O(n) is calculated when reading a function. I can scrap together a lot of useful business stuff but no I am not a very good programmer.


> no I dont know how O(n) is calculated when reading a function

   1. Confidently state "O(n)"
   2. If they give you a look, say "O(1) with some tricks"
   3. If they still give you a look, say "Just joking! O(nlogn)"


O(no idea)


>no I dont know how O(n) is calculated when reading a function

This is really, honestly not hard. Spend a few minutes reading about this, or even better, ask an LLM to explain it to you and clear up your misconceptions if regular blog posts don't do it for you. This is one of the concepts that sounds scarier than it is.

edit: To be clear there are tough academic cases where complexity is harder to compute, with weird functions in O(sqrt(n)) or O(log(log(n)) or worse, but most real world code complexity is really easy to tell at glance.
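To the point that most real-world code is easy to tell at a glance, a toy sketch (hypothetical functions, just counting loops over the input):

```python
def find_max(xs):
    # One pass over the input: O(n)
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def has_duplicate(xs):
    # A loop nested inside a loop over the same input: O(n^2)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

assert find_max([3, 1, 4, 1, 5]) == 5
assert has_duplicate([3, 1, 4, 1, 5]) is True
```

Sorting the input first would drop the duplicate check to O(n log n), which is the kind of trade-off these questions usually fish for.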


It's not hard. Accounting isn't that hard either. I just know more business crap than programming.


Do you mean you aren't able to use AI to make software?

The thing you fear is the thing that you could just use to improve yourself?

Why fear a shovel?

Also, I never claimed to be a good programmer either. Just don't see the point fearing something that makes it infinitely easier and faster to get work done.


I suspect the value you bring to the table is that you are good enough a programmer to translate the problems of the people you work with into working code.

LLMs can do it somewhat, but they can probably leetcode better than even most of the people who went to MIT.


So many people and systems have somehow merged into just a slathering of spam on everyone's senses. It's no longer about truth statements, but just whether this is attention-worthy, and most of the internet, its social media and "people", are going into the no-bin.


My rule of thumb is much shorter: don't.

The open source world has already been ripped off by AI; the last thing it needs is for AI to pollute the pedigree of the codebase.


Suppose almost all work in the future is done via LLMs, just like almost all transportation is done today via cars instead of horses.

Do you think your worldview is still a reasonable one under those conditions?


But all work isn't done by LLMs at the moment, and we can't be sure that it ever will be, so the question is ridiculous.

Maybe one day it will be. And then people can reevaluate their stance. Until that time, it's entirely reasonable to hold the position that you just don't.

This is especially true with how LLM generated code may affect licensing and other things. There's a lot of unknowns there and it's entirely reasonable to not want to risk your projects license over some contributions.

I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.

For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.

For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.


Basically all of my actual programming work has been done by LLMs since January. My team actually demoed a PoC last week to hook up Codex to our Slack channel to become our first level on-call, and in the case of a defect (e.g. a pagerduty alert, or a question that suggests something is broken), go debug, push a fix for review, and suggest any mitigations. Prior to that, I basically pushed for my team to do the same with copy/paste to a prompt so we could iterate on building its debugging skills.

People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.


Sounds like a company on the verge of creating a mess that will require a rewrite in a year or so. Maybe an llm can do it.


I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.

The other factor that gets glossed over is that llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility. When I do code with llms, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.


> llms create a financial incentive to create cleaner code, with tests, because the agent that you pay for will be more efficient when the code is easier to understand, and has clear patterns for extensibility

Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices like good observability, clear tests with high coverage, clean design, etc. act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize it. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc. The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.


> It's clear that doing it by hand would mostly be because you enjoy the process.

This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.




Ultimately you always have to trust people to be judicious, but that's why it doesn't make any changes itself. Only suggests mitigations (and my team knows what actions are safe, has context for recent changes, etc). It's not entirely a black box though. e.g. I've prompted it to collect and provide a concrete evidence chain (relevant commands+output, code paths) along with competing hypotheses as it works. Same as humans should be doing as they debug (e.g. don't just say "it's this"; paste your evidence as you go and be precise about what you know vs what you believe).


That sounds like the perfect recipe for turning a small problem into a much larger one. 'On call' is where you want your quality people, not your silicon slop generator.


I say let people hold this stance. We, agentic coders, can easily enough fork their project and add whatever features or refinements we want, and use that fork for ourselves, but also make it available for others in case other people want to use it for the extra features and polish as well. With AI, it's very easy to form a good architectural understanding of a large code base and figure out how to modify it in a sane, solid way that matches the existing patterns. And it's also very easy to resolve conflicts when you rebase your changes on top of whatever is new from upstream. So, maintaining a fork is really not that serious of an endeavor anymore. I'm actually maintaining a fork of Zed with several additional features (Claude Code style skills and slash commands, a global agents.md file instead of the annoying rules library system, which I removed, and the ability to choose models for sub-agents instead of always inheriting the model from the parent thread; and yes, master branch Zed has subagents!), as well as another tool, jjdag.

That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run.


> We, agentic coders, can easily enough fork their project

And this is why, eventually, you are likely to run the artisanal coders, who tend to do most of the true innovation, out of the room.

Because by and large, agentic coders don't contribute, they make their own fork which nobody else is interested in because it is personalized to them and the code quality is questionable at best.

Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributing to the original, is a double edged sword.


Maybe! Or maybe there is really a competitive advantage to "artisanal" coding.

Personally, I would not currently expect a fork of RedoxOS that is AI-implemented to become more popular than RedoxOS itself.


Indeed, maybe there is. I'm interested to see how it plays out.


"make their own fork which nobody else is interested in because it is personalized to them"

Isn't that literally how open-source works, and why there's so many Linux distros?

Code quality is a subjective term as well, I feel like everyone dunking on AI coding is a defensive reaction - over time this will become an entirely acceptable concept.


For a human to be able to do any customization, they have to dive into the code and work with it, understand it, gain intuition for it. Engage with the maintainers and community. In the process, there's a good chance that they'll be encouraged to contribute improvements upstream even if they have their own fork.

Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.

Perhaps the long term steady state will be a goldilocks renaissance of open source where lots of new ideas and contributors spring up, made capable with AI assistance. But so far what I've seen is the opposite. These people just feed existing work into their LLMs, produce derivative works and never bother to engage with the original authors or community.


> Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.

I spend time using my agent to better understand existing codebases and their best practices than I'd ever have the time/energy to do before, giving me a broader and more holistic view on whatever I'm changing, before I make a change.


Okay, but you don't have to - and "efficient" coders won't bother, thus starving the commons.


Well, I would argue that if I didn't spend that time, then even a personal fork that I vibe coded would be worse, even for me personally. It would be incompatible with upstream changes, more likely to crash or have bugs, more difficult to modify in the future (and cause drift in the model's own output) etc.

I always find it odd that people say both that vibe coding has obvious and immediate negative consequences in terms of quality and at the same time that nobody could learn or be incentivized to produce better architecture and code quality from vibe coding when they would obviously face those consequences.


I think that in the long run, AI assisted coding will turn out to be better than handcrafted code. When you pay for every token, and code generation is quick, a clean, low entropy codebase with good test coverage gets you a lot more for your dollar than a dog's breakfast. It's also much easier to fix bad decisions made early on in a project's life, because the machine is doing all of the heavy lifting.

This also lines up with the history of automation in many other industries. Modern manufacturing is capable of producing parts that a medieval blacksmith couldn't dream of, for example. Sure, maybe an artisan can produce better code than an llm now, but AI assisted humans will beat them in the near future if they aren't already producing similar quality output at greater speed, and tomorrow's models will fix the bad code written today. The fact that there's even a discussion on automated vs hand written today means that the writing is almost certainly on the wall.


You mean like I have to pay my compiler to turn high level code into low level code?


Most "artisanal" coders that are complaining are working on the n-1000th text editor, todo list manager, toy programming language or web framework that nobody needs, not doing "true innovation".


I mean, I do open PRs for most of my changes upstream if they allow AI, once I've been using the feature for a few weeks and have fixed the bugs and gone over the code a few times to make sure it's good quality. Also, I'm going to be using the damn thing, I don't want it to be constantly broken either, and I don't want the code to get hacky and thus incompatible with upstream or cause the LLMs to drift, so I usually spend a good amount of time making sure the code is high quality — integrates with the existing architecture and model of the world in the code, follows best practices, covers edge cases, has tests, is easy to read so that I can review it easily.

But if a project bans AI then yeah, they'll be run out of town because I won't bother trying to contribute.


Well at least you, agentic coders, already understand they need to fork off.

Saves the rest of us from having to tell you.


>> but also make it available for others in case other people want to use it for the extra features and polish as well.

this feels like the place where your approach breaks down. I have had very poor results trying to build a foundation that CAN be polished, or where features don't quickly feel like a jenga tower. I'm wondering if the success we've seen is because AI is building on top of existing foundations, or because we're in the early days of "foundational" work? Is anyone aware of studies comparing longer term structural aspects? Is it too early?


I've been able to make very clear, modular, well put together architectural foundations for my greenfield projects with AI. We don't have studies, of course, so it is only your anecdote versus mine.


> We, agentic coders, can easily enough fork their project and add whatever the features

Bold of you to assume that people won’t move (and their code along with them) to spaces where parasitic behaviour like this doesn’t occur, locking you out.

In addition to being a straight-up rude, disrespectful and parasitic position to take, you’re effectively poisoning your own well.


Since when is maintaining a personal patch set / fork parasitic? And in what way does it harm them, such that they should move to spaces where it doesn't happen? Isn't the entire point of open source precisely to enable people to make and use modifications of code if they want, even if they don't want to hand code over?

Moving to such spaces would essentially mean making the code closed source. Do you think OSS is just going to die completely, or would people make alternative projects? Additionally, this assumes coders who are fine with AI can't make anything new themselves, when if anything we've seen the opposite (see the phenomenon of reimplementing other projects that's been going around).

Additionally, if they accept AI contributions, I try, when I have the time and energy, to make sure my PRs are high quality, and provide them. If they don't, then I'll go off and do my own thing, because that's literally what they asked me to do, and I wasn't going to contribute otherwise. I fail to see how that's rude or parasitic or disrespectful in any way, except for my assumption that the more featureful and polished forks might eventually win out.


It's only parasitic if you are tricking users into thinking you are the original or that you are providing something better. You could be providing something different (which would be valuable), but if you are not, you are just scamming users for your own benefit.


I have no intention of tricking anyone into thinking I'm the original! I do think I offer improvements in some cases, so in cases where the project is something I intend for other people to ever see/use, I do explain why I think it is better, but I also will always put the original prominently to make sure people can find their way back to that if they want to. For example, the only time I've done this so far:

https://github.com/alexispurslane/jjdag


> just like almost all transportation is done today via cars instead of horses.

That sounds very Usanian. Meanwhile, transportation around me is done on foot and by bicycle, bus, tram, metro, train and car. There are good use cases for each method, including the car. If you really want an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are a horrible place to live.

Signed, a person who totally gets good results from coding with LLMs. Sometimes, maybe even often.


As someone who enjoys working with AI tools, I honestly think the best approach here might be bifurcation.

Start new projects using LLM tools, or maybe fork projects where that is acceptable. Don't force the volunteer maintainers of existing projects with existing workflows and cultures to review AI generated code. Create your own projects with workflows and cultures that are supportive of this, from the ground up.

I'm not suggesting this will come without downside, but it seems better to me than expecting maintainers to take on a new burden that they really didn't sign up for.


That would only work in a world where the copyright and other IP uncertainties around the output (and training!) of LLMs were solved and settled questions. That's not the world we currently live in.


The ruling capital class has decided that it is in their best interest for copyright to not be an obstacle, so it will not be. It is delusional to pretend that there is even a legal question here, because America is no longer a country of laws, to the extent that it ever was. I would bet you at odds of 10,000 to 1 that there will never be any significant intellectual property obstacles to the progress of generative AI. They might need to pay some fines here and there, but never anything that actually threatens their businesses in the slightest.

There clearly should be, but that is not the world we live in.


even if this were true, or someday will be (big IF), is it worth looking for valid counter-workflows? For example: in many parts of the US and Canada, the Mennonites are incredibly productive farmers and massive adopters of technology, while also keeping very strict limits on where, how and when it is used. If we had the same motivations and discipline in software, could we walk a line that both benefited from and controlled AI? I don't know the answer.


Good one, I had not made the connection, but yes. Tech is here to serve, at our pleasure, not to be forcibly consumed.


I don't see any cars racing in the Melbourne Cup.


Another great take I found online: "Don't send us a PR, send us the prompt you used to generate the PR."


What I've been begging for every time someone wants me to read their AI "edited" wall of text.


That's a pretty good framework!

Prompts from issue text makes a lot of sense.

