camgunz's comments | Hacker News

This is an argument against all laws, which probably deserves more than a couple sentences.

UBI doesn't address structural inequality, and because handing everyone cash bids up the prices of supply-constrained necessities (health care, housing, child care, education, food), it doesn't make them affordable either. Meanwhile, we've suffered decades of conservatives opposing simple policies like:

* progressive taxation

* wealth tax

* government training workers, procuring supplies, and investing in advances

...but they're all gone now, so let's get going on this stuff.


It was an insane mistake to disdain the GPL.

I've had a set of Etymotic SR4s for years; I just replace the cable every 1-2 years. I love them to death. They're extremely flat, though; Etymotic also makes a version with bumped bass if that's your thing.

If someone made a cable with a mic on it for them, I'd probably buy 10--it's pretty annoying to switch to Apple earbuds for calls, but whatever.


I think SWEs are genuinely pretty shocked and awed that codegen models can code at all, let alone code well. My guess is a lot of the agita around this is that people thought "I can code therefore I'm smart/special/etc." and then a machine comes by that can do pretty equivalent work and they're entirely unmoored. I sympathize with that, and I don't mean to dismiss it, but that's not what I feel. I really dislike this "doer vs. maker" binary stuff that comes up every now and again, as though everyone who thinks codegen models aren't perfect doesn't want to make anything. I really want to make things--good things--and I dislike the current hype wave behind codegen models because they often make it harder for me to make good things.

I've used Claude Code to build a few big things at work; I ask it questions ("where does this happen", "we have problem X, give me 3 potential causes", etc.); I have it review things before I post PRs; our code review bot finds real heisenbugs. I have mixed success with all of this, but even so I find it overall useful. I'd be irritated if some place I worked, past or present, told me I couldn't use Claude Code or the like.
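
Concretely, the question-asking part is just headless mode; a minimal sketch, assuming Claude Code's -p/--print flag, with a made-up bug for the prompt:

    # a sketch: asking for hypotheses non-interactively (the prompt is illustrative)
    claude -p "we're seeing duplicate webhook deliveries in staging; give me 3 potential causes, with file and line references"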

That said, I've not gotten it to be useful in:

- building entire, complex features in brownfield projects

- solving systemic bugs

- system design/evolution

- feature/product design and planning

- replacing senior engineer code review

It will confidently tell you it's done these things, but when you actually force yourself through the mental slog of reviewing its output, you'll realize it's failed (you also have to be an expert to perform this analysis). Now, maybe it fails in an acceptable way; maybe only slight revision is required; maybe it one-shots the change and verifying success isn't a big mental slog. Those are the good cases. More annoying are the times it fails totally and obviously, but the real nightmares are when it fails totally, yet imperceptibly. It also sometimes can do (some of) these things! But it's inconsistent, such that its successes largely serve to lower your guard against its failures.

And the mental slog is real. The artifacts you have to produce/review/ensure the model adheres to are ponderous. The code generated is ponderous. Code review is even more tedious because there's no human mind behind the code, so you can't build a mental model of the author. Getting a codegen model to revise its work or take a different approach is very hit or miss. Revising the code yourself requires reading thousands and thousands of lines of generated code--again with no human behind it--and building a mental model of what's happening before you can effectively work, and that process is time-consuming and exhausting.

I'm also concerned about the second-order effects. Because switching into the often-required deep mental focus is very difficult (borderline painful), I've seen many, many people reach for LLMs in those moments instead, first a little, then entirely. I've watched people copy/paste API docs into Gemini prompts to explain them. I've watched people unable to find syntax errors in code and paste it into ChatGPT to fix it. I'm confident I'm not the only person who's observed this, and it's a little maddening it's not getting more play.

---

I'm not saying SWEs don't fail in similar ways. I've approved--and authored--human PRs that had insidious flaws with real consequences. I've been asked to "review" PRs pre-ChatGPT that were 10x the size they needed to be. I've seen people plagiarize code, or just copy/paste Stack Overflow constantly. The difference is we build process around these risks: everything from coding patterns and PR size limits to type systems, borderline ludicrous amounts of unit tests, CI/CD, design docs, staging environments, blue/green deploys, QA checklists, and, yes, firing people.

I hate all of it! It's a constant reminder of my flaws and it slows down mean time to dopamine squirt of released code. I'd be the first person to give all this shit the axe. I would love to point Claude at the crushingly long list of PRs I have to review. But I can't, because it still has huge, huge flaws. Code review bots miss obvious problems, and they don't have enough context/knowledge about the system/bug/feature to perform a sufficiently comprehensive review. It would be a net time waste because we'd then have to fix a bug in prod or revise an already-deployed feature/fix--things I like even less than code review, if you can believe it.

These models cannot adequately replace humans in other parts of the SDLC. But, because pesky things like design and code review cap codegen models' velocity, our industry is "rethinking" it all, with no consideration of the models' flaws; "rethinking" here meaning "we're considering having an LLM handle all our code review, or not doing it at all". The only way to describe that is reckless disregard. It's unprofessional and unethical.

So, I think my grief isn't about "the craft". I don't think that's gone and I don't think I'd care if it were. My grief is about the humiliation of our profession, the annihilation of our standards and the betrayal of any representation we made to our users--indeed to ourselves. We deserve software systems that do what they say they do, and up until recently I really thought we were working hard to get there. I don't think that anymore; like many other things in our era (community, truth, curiosity, generosity, trust, learning, rationality, practice, compassion) it has retreated in the face of some flavor of self-interested, shallow grift. I really don't know how or why this happened, but regardless of the cause we truly are in a dark time.


Excellent post!

“I'm also concerned about the second-order effects. Because switching into the often-required deep mental focus is very difficult (borderline painful), I've seen many, many people reach for LLMs in those moments instead, first a little, then entirely. I've watched people copy/paste API docs into Gemini prompts to explain them. I've watched people unable to find syntax errors in code and paste it into ChatGPT to fix it. I'm confident I'm not the only person who's observed this, and it's a little maddening it's not getting more play”

That’s exactly why I stopped using LLMs. Then people turn around and say, “but... you’ll get left behind.”

Yeah, nah. I value my ability to hold concepts and reason deeply and sit in those painful moments - I’m not letting go of this conditioning that pays dividends over the long term.


> My guess is a lot of the agita around this is that people thought "I can code therefore I'm smart/special/etc." and then a machine comes by that can do pretty equivalent work and they're entirely unmoored.

It was hard to believe the same person wrote the thoughtful rest of the comment and this insulting assumption.


I’m sorry, but in this day and age, why would you not use AI with safeguards--giving it the proper context and the best practices you’re looking for? These are all very much solved problems in Claude and any agentic system. Are you saying that you don’t? This just feels insulting to those of us who do care about code but also love Claude.
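
For concreteness, “proper context” here mostly means a rules file like CLAUDE.md at the repo root. A minimal sketch, assuming made-up file names and thresholds:

    # CLAUDE.md -- illustrative guardrails, not a standard
    - Follow the error-handling patterns in src/errors.ts; don't invent new ones.
    - Keep any change under ~400 lines; split bigger work into multiple PRs.
    - Run the full test suite before calling a task done; paste the output.
    - Never hand-edit generated files under gen/.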

Only the authored parts can be copyrighted, and only humans can author [0].

"For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the 'traditional elements of authorship' are determined and executed by the technology—not the human user."

"In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that 'the resulting work as a whole constitutes an original work of authorship.'"

"Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are 'independent of' and do 'not affect' the copyright status of the AI-generated material itself."

IMO this is pretty common sense. No one's arguing they're authoring generated code; the whole point is to not author it.

[0]: https://www.federalregister.gov/d/2023-05321/p-40


> IMO this is pretty common sense. No one's arguing they're authoring generated code; the whole point is to not author it.

Actually, this is very much how people think about code.

Consider the following consequence. Say I work for a company. Every time I generate some code with Claude, I keep a copy of said code. Once the full code is tested and released, I throw away any code that was not working well. Now I leave the company and approach their competitor. I provide all of the working code generated by Claude to the competitor. Per the new ruling, this should be perfectly legal, as this generated code is not copyrightable and thus doesn't belong to anyone.


No software company thinks this--not Oracle, not Google, not Meta, no one. See: the guy they sued for taking things to Uber.


The person I replied to said, "No one's arguing they're authoring generated code; the whole point is to not author it." My point was that people absolutely do think, and strongly believe, they are authoring code when they generate it with AI--and thus they are claiming ownership rights over it.


(the person you originally replied to is also me, tl;dr: I think engineers don't think they're authoring, but companies do)

The core feature of generative AI is the human isn't the author of the output. Authoring something and generating something with generative AI aren't equivalent processes; you know this because if you try and get a person who's fully on board w/ generative AI to not use it, they will argue the old process isn't the same as the new process and they don't want to go back. The actual output is irrelevant; authorship is a process.

But, to your point, I think you're right: companies very much think their engineers hold the rights to the output they assign over. If it wasn't clear before, it's clear now: engineers shouldn't be passing off generated output as authored output. They have to have the right to assign the totality of their output to their employer (same as using MIT code or whatever), so that the company ultimately owns it or has a valid license to use it. If they break that agreement, they break their contract with the company.


(oops, I didn't check the usernames properly, sorry about that)

I still don't think this is fully accurate.

The view I keep noticing is that people consider they have rights to the programs they produce, regardless of whether they write them by hand or prompt an LLM in the right ways to produce that output. And this remains true both for work produced as an employee/company owner, and for code contributed to an OSS project.

Also, as an employee, the relationship is very different. I am hired to produce solutions to problems my company wants resolved. This may imply writing code, finding OSS code, finding commercial code that we can acquire, or generating code. As part of my contract, I relinquish any rights I may have to any of this code to the company, and of course I commit to not use any code without a valid license. However, if some of the code I produce for the company is not copyrightable at all, that is not in any way in breach of my contract - as long as the company is aware of how the code is produced and I'm not trying to deceive them, of course.

In practice, at least in my company, there has been a legal analysis and the company has vetted a certain suite of AI tools for code generation. Using any other AI tools is not allowed, and would be a breach of contract, but using the approved ones is 100% allowed. And I can guarantee you that our lawyers would assert copyright over any of the code generated this way if I were to try to publish it or anything of the kind.


Every contract I've seen has some clause where the employee affirms they have the right to assign the rights to their output (code, etc) to the company.

I'm not really convinced; I think if I vibe code an app, and you vibe code an app that's very, very similar, and we're both AI believers, we probably both go "yup, AI is amazing; copyright is useless." You know this because people are actively trying to essentially un-GPL things with vibe coding. That's not authoring, that's laundering, and people only barely argue about it. See: this chardet situation, where the guy was like "I'm intimately familiar with the codebase, I guided the LLM, and I used GPL code (tests and API definitions, which are all under copyright) to ensure the new implementation behaved very similarly to the old one." Anything in the new codebase is either GPL'd or LLM generated, which according to the copyright office, isn't copyrightable. If he's right, nothing prevents me from doing the exact same thing to make a new public domain chardet. It's facially absurd.


So if I want to publish a project under some license and I put a comment in an AI generated file (never mind what I put in the comment), how do you go about proving which portion of that file is not protected under copyright?

If the AI code isn't copyrightable, I don't have any obligations to acknowledge it.


You're looking at this as the infringer rather than the owner. How do you as a copyright owner prove you meaningfully arranged the work when you want to enforce your copyright?


I was looking at it from the perspective of an owner who simply wants to discourage use outside of some particular license.

There's close to zero enforcement of infringement; it's all self-policing or violation.


The Copyright Office says this has to be decided case by case. My guess is they'd ask to see prompts and evidence of authorship.


> wow that's a lot of code, how will we ever review it?

>> have a model generate a bunch of tests instead

> wow that's a lot of test code, how will we know it's working correctly?

>> review it

> :face-with-rolling-eyes:


Not necessarily. The referenced guidance [0] says: "...copyright will only protect the human-authored aspects of the work, which are 'independent of' and do 'not affect' the copyright status of the AI-generated material itself." If you read the paragraph or two above that one, it really seems like products of agentic coding cannot be copyrighted, as there wouldn't be significant authorship involved.

[0]: https://www.federalregister.gov/d/2023-05321/page-16193


I think the thing that drives me nuts is that, while most people think the result of programming is a program, I disagree. The result of programming is one or more people who have a deep understanding of a problem space. Codegen models still require a human in the loop; that human has to be a software expert. You only become a software expert by writing software.


~40% in a few months is epic

