phito's comments

Definitely this. My hobby is already filled with a ton of sloppy vibe-coded apps that all do the same thing very badly, with awful UI/UX. They all ask for a monthly subscription and, surprise, nobody uses them.

Meanwhile, the hand-crafted app that does the same thing gets put in the same bucket as the AI slop ones and is ignored.


I roll my eyes every time I see a coworker post a very long message full of emojis, obviously generated by an LLM with zero post-editing. It's even worse when it's for social communication, such as welcoming a new member to the team. It just feels so fake and disingenuous, I might even say gross.

I don't understand how they can think it's a good idea; I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their heads than this slop.


Some people legitimately have no idea that others recognize LLM output and are put off by it.

Also, I know a lot of non-native English speakers who use AI tools to "correct things". Because of the language barrier, these people especially are unlikely to ever recognize the specific LLM tone that results.


And that's really the insidious thing about these tools: if you can't do the work yourself, then you can't really verify the LLM output either.

Yes, the same outraged users were totally fine with giving Discord all their personal conversations.

Exactly. I'm not going to waste my time reading this AI-generated post that's basically promoting itself.

What I really wonder is who the heck is upvoting this slop on Hacker News?


Another good example, from yesterday: https://hackernews.hn/item?id=46860845

Articles like these should be flagged, and typically would be, but they sometimes appear mysteriously flag-proof.


I did because I want to see a critical discussion around it. I'm still trying to figure out if there's any substance to OpenClaw, and hyperbolic claims like this are a great way to separate the wheat from the chaff. It's like Cunningham's Law.

It only has 11 points. It just got caught in the algorithm. That's all.

But I see these kinds of posts every day on HN with hundreds of upvotes. And it's a thousand times worse on Reddit.

The hundreds of billions of dollars in investment probably have something to do with it. Many wealthy and powerful people are playing for hegemonic control of a decent chunk of the US economy. The entire GDP increase for the US last year was due to AI and, by extension, data centers. So it's not only the AI execs, but every single capitalist in the US whose wealth depends on the line going up every year. Which is, like, all of them. In the wealthiest country on the planet.

So many wealthy players are invested in the outcome, and the technology for astroturfing (LLMs) can ironically be used to boost itself and further its own development.


I was thinking the exact same thing earlier today. I think you're right. They have so much at stake, infinite money and the perfect technology to do it.

As a francophone, I can tell you that the vast majority doesn't.


Among software developers, the vast majority does.


definitely not at a native level...

Why are you nitpicking this? Are all French people incompetent laggards at speaking English? No, definitely not. There's nothing about being French that makes you incapable of typing English text and maybe even *gasp* using a spell and grammar checker. The GitHub org shows seven people; is it so hard to believe they're not absolute dolts at English? Why are you hell-bent on insulting yourself?

https://github.com/orgs/suitenumerique/people


Gives you a good window into a vibe coder's mentality: they do not care about anything except what they want to get done. If something is in the way, they will just try to brute-force it until it works, not giving a duck if they are being an inconvenience to others. They're not aware of existing guidelines, conventions, or social norms, and they couldn't care less.


This sounds like a case of the bias called the availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.


I am starting to get concerned about how much “move fast break things” has basically become the average person’s mantra in the US. Or at least it feels that way.


You're about a decade-plus late to the party. This isn't some movement that happened overnight; it's a slow cultural shift that's been happening for quite some time already. Quality and stability used to be valued; judging by what most people and companies put out today, they seem to be focusing on quantity and "seeing what sticks" instead.


I’m not saying it’s a sudden/brand new thing, I think I’m just really seeing the results of the past decade clearly and frequently. LLM usage philosophies really highlight it.


> I’m not saying it’s a sudden/brand new thing

I was more referencing the whole "I'm starting to worry", while plenty of people have been cautiously observing from the sidelines all the trouble "move fast, break things" brought forward, many of them speaking up at the time too.

It's been pretty evident for quite some time. Even back in 2016, Facebook was used by the military to incite genocide in Myanmar, yet people were still not really picking up on the clues... That's a whole decade ago; times were different, yet things seem the same. That's fucking depressing.


I'm starting to think this is unproductive, tbh.

Particularly since that mantra started around 2005 or so, which was exactly when Silicon Valley stopped creating companies that could run at a profit without a constant investor firehose.


Could it be that you're creating a stereotype in your head and getting angry about it?

People say these things about any group they dislike. It's so common these days that it feels like most social groups are defined by outsiders, through the things they dislike about them.


Well, not really. Vibe coding is literally brute-forcing things until they work, not caring about the details.


So, manual programming. Humans don't always get everything right on the first try either.


If history doesn't repeat, but it rhymes,

does vibe coding rhyme with Eternal September?


If anything, this is good news for Anthropic: they can now bury every open source project with useless issues and PRs.


Are these superpredator vibe coders in the room with us right now?


Well yeah, it is just better. At my work we have a Copilot license, but we use it to access the Claude Sonnet/Opus models in OpenCode.


The Copilot CLI is not so bad:

https://github.com/features/copilot/cli


Can't speak for Copilot, but the Gemini CLI is unbelievably bad compared to Gemini web.

CC has some magic secret sauce and I'm not sure what it is.

My company pays for both too; I keep coming back to Claude all-round.


Claude Code is one of the very few AI tools where I genuinely think the people at the company who build it use it all the time.


They absolutely do. The CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take the code blindly; they would read it and refine it using Claude, while still saving hundreds of man-hours.


> They absolutely do. The CEO has come out and said a few engineers have told him that they don't even write code by hand anymore. To some people that sounds horrifying, but a good engineer would not just take the code blindly; they would read it and refine it using Claude, while still saving hundreds of man-hours.

TBH, that isn't sustainable. Skills atrophy. At some point they are going to take the code blindly.

Considering what they have said in the past about agentic code changes, they are already doing just that: blindly approving code from the agent. I say this because when I last read what one of their engineers on CC tweeted/posted/whatever, I thought to myself, "No human can review that many lines of code per month."[1]

---------

[1] IIRC, it was something stupid like 30k LoC reviewed in a month by a single engineer.


>>Skills atrophy.

I keep telling my friends that while experienced devs feel extremely productive, the newer ones will likely not develop the skills needed to work with the finer aspects of code.

This might work for a while, but after a year or two of it, even as little as a small Python script will feel like yak shaving.



I know they do, because of how painfully awful the Claude web and Claude desktop UX/UI are, not to mention the performance.


Watch the interviews with Boris. He absolutely uses it to build CC.


s/AI//


I would love to hear/see a definitive answer to this, but I read somewhere that the relationship between MS and Anthropic is such that the Copilot version of the Anthropic models has a smaller context window than through CC.

This would explain the "secret sauce", if it's true. But perhaps it's not, and a lot of it is LLM nondeterminism mixing with human confirmation bias.


Agreed. I was an early adopter of Claude Code, and at work we only had Copilot. But the Copilot CLI isn't too bad now. You've got slash commands, Agents.MD and skills.md files for controlling your context, and access to Sonnet & Opus 4.5.

Maybe Microsoft is just using it internally, to finish copying the rest of the features from Claude Code.

Much like the article states, I use Claude Code beyond just its coding capabilities...


Same situation. Once I discovered the CLI and got it set up, my happiness went up a lot. It's pretty good; for my purposes at work, it's probably as good as Claude Code.


The Copilot IntelliJ integration on the other hand is atrocious: https://plugins.jetbrains.com/plugin/17718-github-copilot--y...

I'm amazed that a company that's supposedly one of the big AI stocks seemingly won't spare a single QA position for a major development tool. It really validates Claude's CLI-first approach.


It's sluggish in GitHub Codespaces, as it has so many animations.


It's because we see a bunch of people completely ignoring the missing 20% and flooding the world with complete slop. The pushback is required to keep us sane; we need people reminding others that it's not at 100% yet, even if it sometimes feels like it.


Then you have Anthropic stating on its own blog that engineers fully delegate to Claude Code only 0 to 20% of the time: https://www.anthropic.com/research/how-ai-is-transforming-wo...

The fact that people keep pushing figures like 80% is total BS to me.


It's usually people doing side projects, or non-programmers who can't tell the code is slop. None of these vibe-coding evangelists ever share the code they're so amazed by, even though by their own logic anyone should be able to generate the same code with AI.


This kind of thought policing is getting to be exhausting. Perhaps we need a different kind of pushback.

Do you know what my use case is? Do you know what kind of success rate I would actually achieve right now? Please show me where my missing 20% resides.


Thought policing, lol. People are just sharing their perspectives, no need to take it personally. Glad it's working well for you.


What kind of software are you writing? Are you just a "code monkey" implementing perfectly described Jira tickets (no offense meant)? I cannot imagine feeling this way about what I'm working on. Writing code is just a small part of it; most of the time is spent trying to figure out how to integrate the various (undocumented and actively evolving) external services involved in a coherent, maintainable, and resilient way. LLMs absolutely cannot figure this out themselves; I have to figure it out myself and then write it all into their context, and even then they mostly come up with sub-par, unmaintainable solutions if I wasn't being precise enough.

They are amazing for side projects, but not for serious code with real-world impact, where most of the context is in multiple people's heads.


No, I am not a code monkey. I have an odd role working directly for an exec in a highly regulated industry, managing their tech pursuits/projects. The work can range from exciting to boring depending on the business cycle. Currently it is quite boring, so I've leaned into using AI a bit more just to see how I like it. I don't think that I do.


That's assuming FAANG engineers are actually great.


They're far more likely to be above average I would say.


Above average in tolerance for immoral business models, certainly.

