Definitely this, my hobby is already filled with a ton of sloppy vibe coded apps that all do the same thing very badly with awful UI/UX. They all ask for a monthly subscription and, surprise, nobody uses them.
Meanwhile the hand crafted app that does the same thing gets put in the same bucket as the AI slop ones and is ignored.
I roll my eyes every time I see a coworker post a very long message full of emojis, obviously generated by an LLM with zero post-editing. Even worse when it's for social communication such as welcoming a new member to the team. It just feels so fake and disingenuous, I might even say gross.
I don't understand how they can think it's a good idea, I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.
Some people legitimately have no idea that others recognize, and are put off by, LLM output.
Also, I know a lot of non-native English speakers who use AI tools to "correct things". Because of the language barrier, these people especially are less likely to ever be able to recognize the specific LLM tone that results.
I did because I want to see a critical discussion around it. I'm still trying to figure out if there's any substance to OpenClaw, and hyperbolic claims like this are a great way to separate the wheat from the chaff. It's like Cunningham's Law.
The hundreds of billions of dollars in investment probably have something to do with it. Many wealthy/powerful people are playing for hegemonic control of a decent chunk of the US economy. The entire GDP increase for the US last year was due to AI and, by extension, data centers. So not only the AI execs, but every single capitalist in the US whose wealth depends on the line going up every year. Which is, like, all of them. In the wealthiest country on the planet.
So many wealthy players are invested in the outcome, and the technology for astroturfing (LLMs) can ironically be used to boost itself and further its own development.
I was thinking the exact same thing earlier today. I think you're right. They have so much at stake, infinite money and the perfect technology to do it.
Why are you nitpicking this? Are all French people incompetent laggards at speaking English? No, definitely not. There’s nothing about being French which makes you incapable of typing English text and maybe even *gasp* using a spell and grammar checker. The GitHub org shows seven people, is it so hard to believe they’re not absolute dolts at English? Why are you hell-bent on insulting yourself?
Gives you a good window into a vibe coder's mentality. They do not care about anything except what they want to get done. If something is in the way, they will just try to brute force it until it works, not giving a duck if they are being an inconvenience to others. They're not aware of existing guidelines/conventions/social norms and they couldn't care less.
This sounds like a case of a bias called availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.
I am starting to get concerned about how much “move fast break things” has basically become the average person’s mantra in the US. Or at least it feels that way.
You're about a decade+ late to the party, this isn't some movement that happened overnight, it's a slow cultural shift that has been happening for quite some time already. Quality and stability used to be valued; judging by what most people and companies put out today, they seem to be focusing on quantity and "seeing what sticks" instead.
I’m not saying it’s a sudden/brand new thing, I think I’m just really seeing the results of the past decade clearly and frequently. LLM usage philosophies really highlight it.
I was more referencing the whole "I'm starting to worry" while plenty of people have been cautiously observing from the sidelines all the trouble "move fast, break things" brought forward, many of them speaking up at the time too.
It's been pretty evident for quite some time, even back in 2016 Facebook was used by the military to incite genocide in Myanmar, yet people were still not really picking up on the clues... That's a whole decade ago, times were different, yet things seem the same, and that's fucking depressing.
Particularly since that mantra started around 2005 or so, which was exactly when Silicon Valley stopped creating companies that could run at a profit without a constant investor firehose.
Could it be that you're creating a stereotype in your head and getting angry about it?
People say these things about any group they dislike. It's so common that these days it feels like most social groups are defined by outsiders, by the things they dislike about them.
They absolutely do, the CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take code blindly, they would read it and refine it using Claude, while still saving hundreds of man-hours.
> They absolutely do, the CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take code blindly, they would read it and refine it using Claude, while still saving hundreds of man-hours.
TBH, that isn't sustainable. Skills atrophy. At some point they are going to take the code blindly.
Considering what they have said in the past about agentic code changes, they are already doing just that - blindly approving code from the agent. I say this because when I last read what one of their engineers on CC tweeted/posted/whatever, I thought to myself "No human can review that many lines of code per month"[1].
---------
[1] IIRC, it was something stupid like 30kLoc reviewed in a month by a single engineer.
I keep telling my friends that while experienced devs feel extremely productive, the newer ones will likely not develop the skills needed to work with the finer aspects of code.
This might work for a while, but you do a year or two of this, and then as little as a small Python script will feel like yak shaving.
I would love to hear/see a definitive answer for this, but I read somewhere that the relationship between MS and \A is such that the copilot version of the \A models has a smaller context window than through CC.
This would explain the "secret sauce", if it's true. But perhaps it's not and a lot is LLM nondeterminism mixing with human confirmation bias.
Agreed. I was an early adopter of Claude Code, and at work we only had Copilot. But the Copilot CLI isn't too bad now. You've got slash commands, AGENTS.md and skills.md files for controlling your context, and access to Sonnet & Opus 4.5.
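For anyone who hasn't used one: an AGENTS.md is just free-form Markdown the agent reads at the start of a session. A minimal sketch (the project details and commands here are made up for illustration) might look like:

```markdown
# AGENTS.md

## Project overview
Small TypeScript REST API (hypothetical example project).

## Build and test
- `npm install` — install dependencies
- `npm test` — run unit tests; run before every commit

## Conventions
- Prefer small, focused functions over classes.
- Never hand-edit anything under `generated/`.
```

Keeping it short matters: everything in the file lands in the model's context on every session, so it's a budget, not a wiki.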
Maybe Microsoft is just using it internally, to finish copying the rest of the features from Claude Code.
Much like the article states, I use Claude Code beyond just its coding capabilities...
Same situation, once I discovered the CLI and got it set up, my happiness went up a lot. It's pretty good, for my purposes at work it's probably as good as Claude Code.
I'm amazed that a company that's supposedly one of the big AI stocks seemingly won't spare a single QA position for a major development tool. It really validates Claude's CLI-first approach.
It's because we see a bunch of people completely ignoring the missing 20% and flooding the world with complete slop. The push back is required to keep us sane, we need people reminding others that it's not at 100% yet even if it sometimes feels like it.
It’s usually people doing side projects or non-programmers who can’t tell the code is slop. None of these vibe coding evangelists ever shares the code they’re so amazed by, even though by their own logic anyone should be able to generate the same code with AI.
This kind of thought policing is getting to be exhausting. Perhaps we need a different kind of push back.
Do you know what my use case is? Do you know what kind of success rate I would actually achieve right now? Please show me where my missing 20% resides.
What kind of software are you writing? Are you just a "code monkey" implementing perfectly described Jira tickets (no offense meant)? I cannot imagine feeling this way with what I'm working on, where writing code is just a small part of it. Most of the time is spent trying to figure out how to integrate the various (undocumented and actively evolving) external services involved in a coherent, maintainable and resilient way. LLMs absolutely cannot figure this out themselves, I have to figure it out myself and then write it all into their context, and even then they mostly come up with sub-par, unmaintainable solutions if I wasn't being precise enough.
They are amazing for side projects but not for serious code with real world impact where most of the context is in multiple people's head.
No, I am not a code monkey. I have an odd role working directly for an exec in a highly regulated industry, managing their tech pursuits/projects. The work can range from exciting to boring depending on the business cycle. Currently it is quite boring, so I've leaned into using AI a bit more just to see how I like it. I don't think that I do.