I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.
I'd say it's far more tiring working that way, though. You're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand; when you had a problem, figuring it out was like solving a puzzle, and you felt satisfaction at the end of it. With AI it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting. Even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.
Agree with you for my day job (which is coding corporate web apps), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.
But for my personal projects I work on games, and by offloading a lot of the coding work to A.I., the puzzle I'm solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' but rather 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.
It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or constantly learn some new API or library. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it; I could write something on an index card in five seconds and shift mid-game most of the time.
A.I. just brought video games closer to that experience, which has actually made them more fun to work on again, because with board games you have the immense challenge of getting physical games published to worry about (financial/logistical if self-publishing, or social/networking if attempting to get published through a publisher).
The puzzle thought was mostly me trying to figure out why AI coding was more emotionally tiring when I'm literally doing less and creating more; maybe it's something else.
I find this interesting as someone who does primarily DevOps: my satisfaction has increased with AI, since for me the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system. For me, QA is a big part of solving the puzzle.
DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.
I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.
What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffer the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.
The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.
It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"
The profit motive simply does not drive society in this direction.
The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.
Avoiding crisis over the longer term is simply not investable.
"Fair" is not a relevant or useful conception in this context.
Not wasting other people’s time when they expect your work to at least pass a cursory check. It’s selfish and disrespectful. It reflects poorly on you. I don’t know about all that other stuff you wrote, but it’s not really what I’m talking about, so I’ll clarify.
I don’t know what your high school/college was like, but we used to trade papers for editing. It was universally considered bad practice to send rough/first drafts. It’s disrespectful and wastes the time of people who are being generous with it for you. You’re offloading your work in a selfish way.
Simply put: If I want an LLM’s raw results, I’ll prompt it myself. Why are you involved if I don’t want your work? Your expertise? Want to use an LLM then go for it but don’t just wipe its muddy boots on my work. At least look at the results.
Unfortunately, this is becoming even more common with LLMs. I have no problem confronting people about it because 100% of the time they don’t want it done to them. It’s not even an argument; it’s catching them being selfish, and they know it.
Are the people paying your paycheck being fair to you? Are the executives of your company paid orders of magnitude more than you are? Fairness starts from there. Your job is to be as unexploited as possible. I hope my coworkers also have this goal.
What does my relationship with the c-suite/my work have to do with a colleague dumping their unedited ChatGPT crap onto me? I legitimately do not understand the point you’re trying to make. There seem to be a lot of assumptions here, and I’m not sure what they are.
Sending your unedited LLM outputs to me is not sticking it to the execs. If you really want to play that game, you go ahead and ship that or hand it to someone who deals with the final output. That’s your prerogative and you can face the consequences. I am not here to clean up your AI slop. That’s not my job. At that point you are the problem, not the c-suite.
All I hear from AI evangelists is “it’s a tool! It’s not the problem! It’s people using it wrong!” Ok, then the people using it are the problem if something is wrong. So if you act this way, which is clearly not a productive use of the tool, you are the problem.
Edit: let me just ask you a somewhat multi-faceted question. If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied? Is that what you wanted me to do? Is there a reason you asked me to do it instead of prompting ChatGPT yourself?
What if I did this every time I had to write anything? Every email. Every summary. Every report. Just prompt, copy, paste, send to you.
> If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied?
Yes. Again my job is to stay unexploited. Saying yes is the easiest option. I'll leave the worrying to the people making an order of magnitude more money than me.
It seems you are either very unhappy at your job or just anti-work. That’s fine, you do you (and sorry if your work sucks), but there is a huge gradient between “completely not caring and doing the bare minimum to collect a paycheck” and “sacrificing everything for a company that does not care about me.” Many of us fall in that gradient. We do decent work and clock out when we’re done.
If you want to phone it in or act your wage or whatever go ahead but don’t make it my problem. You’re not sticking it to your employer. You’re actively making your workplace worse for everyone else. Your decisions impact others.
This is like working in the service industry and simply not doing your job. Management doesn’t suffer and they’ll just fire you. The people you work with have to do your job for you. What have you actually accomplished?
First of all, I don't agree with your implication that AI produced code is bad. It's as good as the developer prompting it in my experience. Secondly, yes I'm anti-work. Capitalism does not allow for what you are desiring. Capitalism is configured such that capital is seeking maximum return for minimal costs (my pay). I am incentivized to do the opposite. Wealth inequality is a multiplier on how hard I'm going to try to achieve my goal.
> First of all, I don't agree with your implication that AI produced code is bad.
Never said that. I said generating code with an LLM then not looking at it at all and pushing it (which is what started this whole comment thread) is a selfish and lazy decision.
Not everyone subscribes to a strict anti-work stance. Most people don’t, in fact. So we’re at quite an impasse, and it doesn’t change the fact that your decisions become your colleagues’ problems and do nothing to deconstruct/fight capitalism. I feel sorry for anyone who works with you if this is not an internet routine and reflects how you actually operate.
> Like I said, most code that's written by AI is better than code written by a human
1) this is an arbitrary bar that needs more qualifiers (all code? All people?) and 2) citation needed.
I don’t care how it was generated. I want you to vet the results at some point with your knowledge and not send me whatever it spits out with no consideration for the results. You’re not sticking it to capitalism when you pass the buck to me. You’re being selfish.
I think we are just too far apart on this to be productive unfortunately. I just urge you to consider the impact of your choices. See my accessibility comment from a different part of the thread:
> I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.
> What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands that clearly don’t see this as an issue, or 3) send it up the chain (or laterally) where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.
Is it correct? Is it any good? Should I subject another person to this? Is it profoundly rude to not even read their email and just have a robot respond automatically?
The slopmonger does not engage with the question at all, because they never cared.
My boss gets annoyed if I try to do things without AI, so eventually I caved, but I don't see the point in reading it if that's the culture being pushed at the company.
Also, anyone else dealing with it is just gonna be dealing with it via AI, so it doesn't really matter.
If I worked somewhere where the CEO cared about hand-written code, I would be writing it and reading it, but I don't.
Because you can’t assume everyone else is as indifferent about wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces me to decide whether I will be a part of the problem, even if I generally do good work I can stand behind. You’re forcing me into a decision-making process purely out of your desire to not do even the bare minimum when working. That’s not right.
I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.
What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands that clearly don’t see this as an issue, or 3) send it up the chain where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.
Only with very old technology. It's possible to force ID validation from silicon to server, or even to require it to unlock the CPU cores, so if it ever comes to what you suggest, that will also happen.
If it was about this, why do OpenAI and Anthropic lose their minds when people train off their output or try to scrape their systems?
I actually don't have an issue with training off the mass of everyone's work if the models are open and free to build upon; it's locking them away and then throwing your toys out of the pram when people try to do the same thing that bothers me.
Good question. I actually have a technical answer, believe it or not.
Pre-training is: training a model from scratch on cheap data that sets the foundation of a model's capabilities. It produces a base model.
Post-training is: training a base model further, using expensive specialized data, direct human input, and elaborate high-compute methods to refine the model's behavior and imbue it with the capabilities that pre-training alone has failed to teach it. It produces the model that's actually deployed.
When people perform distillation attacks, they take an existing base model and try to post-train it using the outputs of another proprietary model.
They're not aiming to imitate the cheap bulk pre-training data - they're aiming to imitate the expensive in-house post-training steps. Ones that the frontier labs have spent a lot of AI-specialized data, compute, labor and hours of R&D work on.
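To make the mechanics concrete, here's a minimal sketch of that distillation loop in Python. Everything named here (teacher_query, student_finetune, the base model string) is a hypothetical stand-in for a proprietary inference API call and an open-weights SFT run, not any lab's real interface:

    def distill(prompts, teacher_query, student_finetune):
        """Hypothetical sketch: post-train a student on a teacher's outputs."""
        sft_pairs = []
        for prompt in prompts:
            # Sample the proprietary teacher. This is the step labs try to
            # frustrate: hiding the full chain of thought, rate limiting,
            # and banning accounts suspected of bulk collection.
            completion = teacher_query(prompt)
            sft_pairs.append({"prompt": prompt, "completion": completion})
        # Fine-tune an open base model on the collected pairs. The student
        # never touches the teacher's weights or pre-training corpus; it
        # imitates only the expensive post-trained behavior.
        return student_finetune(base_model="some-open-base-model", data=sft_pairs)

The asymmetry is the whole point: collecting a pile of teacher completions is cheap relative to the SFT/RLHF pipeline that produced them.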
This is probably not "fair use", because it directly tries to take and replicate a frontier lab's competitive edge, but that hasn't been tested in court. And a lot of the companies caught doing that for their own commercial models are in China, so the path to legal recourse is shaky at best. But what's on the table is restricting access to the full chain of thought and banning suspected distillation attackers from the inference API. Which is a bit like trying to stop a sieve from leaking - but it may slow the competitors down at least.
>Ones that the frontier labs have spent a lot of AI-specialized data, compute, labor and hours of R&D work on.
Granted, that's time and money, but it's an absolutely minuscule number of human hours compared to the scraped data.
We know this for a fact because of parallelization: it's the work of hundreds of millions of people vs. the work of 20-100. Even if OpenAI's team worked for the entire lifetimes of its current members, plus the lifetimes of their offspring, and their offspring's offspring, they still wouldn't have made a dent in recreating that initial scraped training data.
This is like trying to apply "labor theory of value" to datasets. It doesn't work any better there than it does in economics in general.
It doesn't matter how many human hours went into making a Twitter shitpost. What matters is: how much value does it add to a pre-training run, and how easily can it be replaced with another data source?
"Cheap data" has low training value and is easy to replace. Twitter shitposts are worthless except in aggregate. "Expensive data" is what has high training value and is hard to replace. Things like SFT traces, domain expert RLHF guidance, RLVR bits - that's what the "moat" is.
I’ll never understand modding in this day and age. I got it back in the Quake and Half-Life 1 days, when teenagers didn’t have access to commercial game engines, but modding today seems crazy: investing time into building on infrastructure you don’t own, that you can’t successfully monetize, and that will likely be taken from you if you do.
So much has to go right for a new game to see even moderate success. In addition to the programming, you need an art director to give your game a coherent style, 2D texture artists, 3D model and terrain artists, UI designers, music composers, narrative writers, etc, and on top of that you need a compelling universe and concepts for all of these people to work from. And then once that's all done, you need competent marketing so people actually know about this game so they can want to play it.
By comparison, with a pre-existing game much of this is already out of the way and amateurs can get pretty far by just kitbashing existing assets and occasionally mimicking them when creating new assets. Marketing can be as simple as, "this thing you liked, but more, in the way you want it". It's a much smaller lift.
>I strongly suspect a lot of their (A/M)RR was coming from extra seats for PMs, developers, etc
Their seats system has always been brutal: it's extremely easy to have the seats balloon if you're not careful, and if they're yearly, there's only a 30-day window each year where you can cancel them, when the banner to do so appears.