While I agree overall, I'm going to do some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues," AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both on the implementation and the final design level, because I'm not stuck looking up APIs and dealing with already solved problems.
It's kind of freeing to put a software project together and not have to sweat the boilerplate and rote algorithm work. Boring things that used to dissuade me. Now, I no longer have that voice in my head saying things like: "Ugh, I'm going to have to write yet another ring buffer, for the 14th time in my career."
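For anyone unfamiliar, that kind of rote structure looks something like this: a minimal ring buffer sketched in Python, purely for illustration (this is a generic textbook version, not from any particular codebase):

```python
class RingBuffer:
    """Fixed-capacity FIFO that overwrites the oldest item when full."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the oldest item
        self.size = 0

    def push(self, item):
        # Write at the logical tail position.
        tail = (self.head + self.size) % self.capacity
        self.buf[tail] = item
        if self.size == self.capacity:
            # Buffer full: advance head, dropping the oldest item.
            self.head = (self.head + 1) % self.capacity
        else:
            self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("pop from empty ring buffer")
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return item
```

Straightforward, well-understood, and exactly the kind of thing you've typed out a dozen times before.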
The boring parts are where you learn. "Oh, I did that; this is now not that, and it does this! But it was so boring building a template parser." You've learnt.
Boring is supposed to be boring for the sake of learning. If you're bored then you're not learning. Take a look back at your code in a week's time and see if you still understand what's going on. Top level, maybe, but the deep-down cogs of the engine of the application? I doubt it. Not to preach, but that's what I've discovered.
Unless you already have the knowledge, then fine: "here's my code, make it better." But if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is superior to your own coding?
Exactly this. Finding that annoying bug that took 15 browser tabs and digging deep into some library you're using, digging into where your code is not performant, looking for alternative algorithms or data structures to do something, this is where learning and experience happen. This is why you don't hire a new grad for a senior role, they have not had time to bang their heads on enough problems.
You get no sense of how or why when using AI to crank something out for you. Your boss doesn't care about either, he cares about shipping and profits, which is the true goal of AI. You are an increasingly unimportant cog in that process.
I actually learn quite a bit from my AI vibe coding and debugging. The other day I had a configuration issue in my codebase that only happened in prod. I didn't understand it (my coworker coded it and he was busy with something else at the time). I asked AI to help and it told me why it was broken and how to fix it, and also fixed it for me.
Since the issue was due to the intersection of k8s and effect, I don't think reading a bunch of docs would have really helped.
Of course I'm sure there's plenty of people who don't care about understanding the bugs and just want to fix things fast. But understanding these bugs helps me prompt/skill the LLM to prevent them in the future.
> I asked AI to help and it told me why it was broken and how to fix it, and also fixed it for me.
This is just it, you didn't learn anything here. In 3 months, you will only remember that AI fixed some issue for you. You will have none of the knowledge and experience that struggling and thinking and googling and trying things out until it works provides. You may as well have asked some other person to fix it, and they would at least have learned something.
Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt. Which is much easier to find than someone who actually knows the what and how and why of the code. But hey, you fixed the bug, it ships, boss makes money, the company wins. But you sure don't...
> In 3 months, you will only remember that AI fixed some issue for you.
That's not exclusive to AI. I've solved plenty of bugs pre-AI that I would go down similar rabbit holes to fix again without AI. I've spent days hunting down bugs like this in the past while only remembering that I spent days on the bug, not anything meaningful. It's not something I enjoy repeating.
> Also, anyone could be plugged into your job when it works this way. All they need is someone who can type into a prompt.
Maybe. The reality is that my coworkers of varying experience levels do attempt to vibecode/debug and are never happy with the results. I don't know what they're prompting, but it just goes to show that it's not as easy as just "typing into a prompt."
> you fixed the bug, it ships, boss makes money
Yeah, that's how it's always been, no? Boss doesn't care how it got fixed as long as it got fixed, and added points if it got fixed quickly so I can work on other features. I may not win long-term if I do use AI, but I certainly don't win short-term if I don't use AI, because I can't afford to spend days to fix a bug that AI can fix in an hour.
If learning about individual cogs is what's important, and once you've done that it's okay to move on and let AI do it, then you can build the specific thing you want to learn about in detail in isolation, as a learning project, like many programmers already do and many CS courses already require; perhaps on your own, or perhaps following along with a substantial book on the matter. Then, once you've gained that understanding, you can move on to other things in projects that aren't focused on learning about that thing.
It's okay not to memorize everything involved in a software project. Sometimes what you want to learn or experiment with is elsewhere, and so you use the AI to handle the parts you're less interested in learning at a deep and intimate level. That's okay. This mentality that you absolutely have to work through manually implementing everything, every time, even when it's not related to what you're actually interested in, wanted to do, or your end-goal, just because it "builds character" is understandable, and it can increase your generality, but it's not mandatory.
Additionally, if you're not doing vibe coding, but sort of pair-programming with the AI in something like Zed, where the code is collaboratively edited and it's very code-forward — so it doesn't incentivize you to stay away from the code and ignore it, the way agents like Claude Code do — you can still learn a ton about the deep technical processes of your codebase, and how to implement algorithms, because you can look at what the agent is doing and go:
"Oh, it's having to use a very confusing architecture here to get around this limitation of my architecture elsewhere; it isn't going to understand that later, let alone me. Guess that architectural decision was bad."
"Oh, shit, we used this over complicated architecture/violated local reasoning/referential transparency/modularity/deep-narrow modules/single-concern principles, and now we can't make changes effectively, and I'm confused. I shouldn't do that in the future."
"Hmm, this algorithm is too slow for this use-case, even though it's theoretically better, let's try another one."
"After profiling the program, it's too slow here, here, and here — it looks like we should've added caching here, avoided doing that work at all there, and used a better algorithm there."
"Having described this code and seeing it written out, I see it's overcomplicated/not DRY enough, and thus difficult to modify/read, let's simplify/factor out."
"Interesting, I thought the technologies I chose would be able to do XYZ, but actually it turns out they're not as good at that as I thought / have other drawbacks / didn't pan out long term, and it's causing the AI to write reams of code to compensate, which is coming back to bite me in the ass, I now understand the tradeoffs of these technologies better."
Or even just things like
"Oh! I didn't know this language/framework/library could do that! Although I may not remember the precise syntax, that's a useful thing I'll file away for later."
"Oh, so that's what that looks like / that's how you do it. Got it. I'll look that up and read more about it, and save the bookmark."
> Unless you already have the knowledge, then fine. "here's my code make it better" but if it's the 14th time you've written the ring buffer, why are you not using one of the previous thirteen versions? Are you saying that the vibed code is more superior then your own coding?
There are a lot of reasons one might not be able to, or want to, use existing dependencies.
I assume you use JavaScript? TypeScript or Go perhaps?
Pfft, amateur. I only code in Assembly. Must be boring for you using such a high-level language. How do you learn anything? I bet you don't even know what the cog of the engine is doing.
Can you elaborate on the implied claim that you've never built a project that you spent more than two months thinking about? I could maybe see this being true of an undergraduate student, but not a professional programmer.
Not to put words in their mouth, but it seems like they mean the X minutes they used to spend typing the solution in via the keyboard, after thinking about a problem, is time they no longer have to spend.
Yesterday I had two hours to work on a side project I've been dreaming about for a week. I knew I had to build some libraries and that it would be a major pain. I started with AI first, which created a script to download, extract, and build what I needed. Even with the script I did encounter problems, but I blitzed through each one until the libraries were built and I could focus on my actual project, which was not building libraries! I actually reached a satisfying conclusion instead of stalling halfway through compiling something I do not care about.
You don't fit the profile OP is complaining about. You might not even be "vibe" coding in the strictest sense of that word.
For every person like you who puts in actual thought into the project, and uses these tools as coding assistants, there are ~100 people who offload all of their thinking to the tool.
It's frightening how little collective thought is put into the ramifications of this trend not only on our industry, but on the world at large.
Who cares if some idiot makes some ai shit and doesn’t learn anything? That same person has had access to a real computer which they’ve wasted just as effectively until now.
I think you're missing the general point of the post.
>AI frees my human brain to think about goals, features, concepts, user experience and "big picture" stuff.
The trigger for the post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders, whatever their coding chops are. For Show HN posts, the sentence I quoted precisely describes the things that would be mind-numbingly boring to Show HN readers.
pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves and also have some biochemist commenting, "I'm working at a so-and-so research lab and this is exactly what I was looking for!"
Now the biochemist is out there vibe-coding their own solution, and now, there is no way for the HN reader to differentiate your "robust" entry from a completely vibe-code noobie entry, no matter how long you worked on the "important stuff".
Why? Because the barrier of entry has been completely obliterated. What we took for granted was that "knowing how to code" was a proxy filter for "thought and worked hard on the problem." And that filter allowed for high-quality posts.
That is why the observation that you can no longer guarantee, or have any quick way of telling, that the posters spent some time on the problem is a great observation.
The very value that you gain from vibe-coding is also the very thing that threatens to turn Show HN into a glorified Product Hunt cesspool.
"No one goes there any more, it's too crowded." etc etc
Why do I care who made the thing showed to HN? If someone makes a tool that I like or a project that’s amazing, but they did so with a robot, who is harmed?
Like all we need to do is decouple “I made this” from “I can compose all parts in my mind”, which were never strongly coupled anyway. Is the thing that is being shown neat? Cool! Does it matter if it was a person or 20 people or a robot? I don’t think so, unless it’s special pleading for humans.
I think this is quite a strange argument. Any technical show-and-tell in the form of 'I wrote a cool implementation of such-and-such algorithm' is obviously much less impressive if someone/something else wrote it, but that's always been true, and I think the Show HN format is largely used for tools or products that someone has created, in which case what's more interesting is the problem it solves and how it solves it. It's exactly as you say with your hypothetical biochemist; they've been looking for a tool like this! I don't think they spent much time worrying about how it was written or what the REST API would look like.
There is a proliferation of frameworks and libraries supplying all kinds of mundane needs that developers have; is it wrong for people Showing HN to use those? Do libraries and frameworks not lower the barrier to entry? There have been many cases of 'I threw this together over a weekend using XYZ and ABC', haven't there? What's interesting is how they understand the domain and how they address the problems posed by it - isn't it? Sure, the technical discussion can be interesting too but unless some deep technical problem is being solved, I don't care too much if they used Django or Flask, and which database backend they chose, unless these things have a significant impact on the problem space.
> the barrier of entry has been completely obliterated
I was very interested in 3D graphics programming back in the DOS days before GPUs were a commodity, and at that time I felt the same about hardware accelerated rendering - if no-one needs to think about rasterisation and clever optimisation algorithms, and it's easy to build a 3D engine, I thought, then everyone and their dog will make a game and we'll drown in crappy generic-looking rubbish. Turns out that lowering barriers to entry doesn't magically make everything easier, but does allow a lot more people to express their creativity who otherwise would lack the knowledge and time to do so. That's a good thing! Pre-made engines like Godot remove an absolute ton of the work that goes into making a game, and are a great benefit to the one-man-bands and time-strapped would-be game designers out there whose ideas would otherwise die in the dark.
You seem to be insisting on arguing against arguments that have not been made and ignoring the whole point of the original post.
I am having to repeat the beginning of my previous comment:
>>The trigger for the [original] post was about post-AI Show HN, not about whether vibe-coding is of value to vibe-coders.
The topic is: The drop in quality of post-AI Show HN. It is specifically about this community. Please read the context the OP has referenced in their own post:
Instead of addressing the specifics of that post, you ignore the points that were made there and prefer to talk about why vibe-coded solutions should be interesting to pre-AI programmers. Ok, let's go there.
>if no-one needs to think about rasterisation and clever optimisation algorithms, and it's easy to build a 3D engine, I thought, then everyone and their dog will make a game and we'll drown in crappy generic-looking rubbish. [Turns out that's not the case.]
Here in this context, you are confusing "easy" with "non-human". Specifically, when people here decry the banality and tediousness of perusing and reviewing vibe-coded solutions by "everyone and their dog" the emphasis is on and their dog. Let's be clear, a non-deterministic non-human entity that is coding something by approximating the intentions of a human is not the same thing as a human developing a 3D engine or SDK end-to-end with human intentionality, no matter how "easy" coding a 3D engine has become. So it leaves it to the HN reader to figure out what level of ownership the human poster has over their 90% vibe-coded solution. It's no surprise that HN readers, when alerted to the possibility via a Show HN post, would rather just vibe-code a solution themselves if they are interested in the problem space instead of engaging with the Show HN post itself. When hard-pressed, I can think of very few instances where programmers would not prefer to vibe-code their own solution instead of test-running and reviewing someone else's AI slop. Some of the casual statistics that the original posters have bothered to look at seem to bear this out.
Sorry, with respect I think you've missed the point of my comment (which was a reply to your comment, and not a reply to the original post).
You asserted that
> pre-AI, what was impressive to Show HN readers was that you were able to actually implement all that you describe in that sentence by yourselves...
and latterly
> ... HN readers, when alerted to the possibility via a Show HN post, would rather just vibe-code a solution themselves if they are interested in the problem space
and my point is that I disagree - the implementation of an idea in terms of the actual coding is far less interesting to me (and my assertion is: by extension, less interesting to the average reader) than the implementation in terms of the behaviour of the thing. Perhaps you're concerned about someone opening Claude Code and typing "Write me an application that does XYZ" but it's pretty obvious that so far that doesn't produce anything useful, and I think is more of a problem for sites like Stack Overflow where an answer is a small singular thing rather than an entire system.
There is a spectrum between 'writing it all yourself' and 'YOLO vibe-coding' and if you're only arguing about the latter end of the spectrum then, sure, those tend to suck, but I don't think we're really at risk of being drowned in those projects; that's a kind of slippery-slope argument. This is why I talked about 3D graphics; I earlier feared the 'YOLO 3D game' projects taking over, and that just hasn't happened. I believe we (humans) had similar discussions around the time that typewriters and the printing press were invented - 'if you're not handwriting your ideas then you're not really thinking!' but the ideas are the point, not the process of writing them down.
I tend to agree, this has been my experience with LLM-powered coding, especially more recently with the advent of new harnesses around context management and planning. I’ve been building software for over ten years so I feel comfortable looking under the hood, but it’s been less of that lately and more talking with users and trying to understand and effectively shape the experience, which I guess means I’m being pushed toward product work.
That's the key: use AI for labor substitution, not labor replacement. Nothing necessarily wrong with labor saving for trivial projects, but we should be using these tools to push the boundaries of tech/science!
Lmfao. The front page is littered with whining about the craft from people who can’t argue coherently why I should go back to getting yelled at by a linter.
It’s all “I can’t think anymore” or “software bad now” followed by a critique of the industry circa 2015.
Most of the people making cool stuff with LLMs are making it, not writing blog posts hoping to be a thought leader.
Google: "Google CEO Sundar Picchai has been a fixture at the White House, attending parties and events. He oversaw Google’s $22 million donation to the White House ballroom and its $1 million donation to Trump’s inaugural fund. Brin, meanwhile, has become a Trump supporter."
So are Elon, JPMorgan Chase and IBM "anti-education"?
From the article you linked:
“I think the value of a college education is somewhat overweighted,” Musk said in a video he later reposted on X. “Too many people spend four years, accumulate a ton of debt, and often don’t have useful skills that they can apply afterwards.”
And because some young people have already caught on—and begun exploring alternative education pathways—many companies like JPMorgan Chase and IBM have scaled back their degree requirements on job postings. Michael Bush, the CEO of Great Place to Work, predicts this trend will only continue to grow.
“Almost everyone is realizing that they’re missing out on great talent by having a degree requirement,” he previously told Fortune. “That snowball is just growing.”
It's certainly ... convenient that any criticism of a wealthy tech oligarch can be dismissed as "anti-education" as a rhetorical cudgel. It's not like others may actually have problems with say, his actual beliefs, the specific methods Tan uses to "promote" education (attacking school teachers/etc), further concentration of power to SV billionaires, etc.
And as you say, somewhat comical to lob that accusation given the current crop of tech oligarchs are firmly aligned with an overtly anti-intellectual/anti-education movement on the right and are systematically working to dismantle higher education at this very moment.
As an aside, there has been a crazy amount of (brigaded?) flagging/downvoting of comments critical of Tan in this thread. Each time I check back I see fairly anodyne comments go back and forth from grey, with a few eventually nuked by flagging. Can't think of a similar recent example of an HN thread with the same patterns even with highly charged/controversial topics.
Cost Plus manufactures and sells drugs at a small markup over manufacturing cost.
TrumpRX appears to be more of a central clearinghouse where drug makers can offer discounts to consumers. And at least so far, they seem to be the same discounts that they already offer, when you look up a drug on the site, it redirects you to the manufacturer's website.
Maybe it's useful for people without insurance who don't know how to search for discount programs to help them buy drugs, and maybe some manufacturers will offer discounts on the site that aren't available otherwise, but it's not a competitor to Cuban's site.
I haven't heard of this but he must be pretty proud of it, the page title is literally, "Homepage of Mark Cuban Cost Plus Drugs", his name is in the logo, and the picture is on the front page.
The branding is... weird. I always thought it was just a campaigning tactic for Mark's upcoming presidential bid, but given how things have played out, now I'm not so sure.
I’ve jumped on several threads where he’s discussing the product and it seems as though he enjoys the idea of putting PBMs out of the public’s misery and making it kind of personal.
then also it’s an invitation for debate or discussion, which he seems open to and approachable about… as scoldy and confrontational as that may be from what I’ve gathered …
the gist of that is “this sucks we want single payer”
and what I’ve surmised so far is, that’s great but until we’re there I’m taking on PBMs
I'm okay with either or both of those objectives, neither of which is advancing.
Not really on topic for “trumprx,” forgive me. I’m trying to avoid contempt before investigation.
Some, maybe, but that's just another nice layer of plausible deniability.
The truth is that the internet is both (what's the word for 'both' when you have three (four?) things?) dead, an active cyber- and information-warfare zone, and a dark forest.
I suppose it was fun while it lasted. At least we still have mostly real people in our local offline communities.
I was curious so I looked it up. Your description of the events isn't quite accurate IMHO. There was an objection to a Meta datacenter, but then state lawmakers passed new laws after losing the business to NM. It doesn't look like anyone was "fooled" by the anonymous bid but rather they simply changed their minds/laws.
> In 2016, West Jordan City sought to land a Facebook data center by offering large tax incentives to the social media giant. That deal ultimately fell through amid opposition by Salt Lake County Mayor Ben McAdams and a vote of conditional support by the Utah Board of Education that sought to cap the company’s tax benefits.
> That project went to New Mexico, which was offering even richer incentives.
> Three months after the Utah negotiations ended, state lawmakers voted in a special session to approve a sales tax exemption for data centers. The move was seen by many as another attempt to woo Facebook to the Beehive State.
So basically they first said "No", lost the bid, had FOMO so they passed new laws to attract this business.
>Asked about the identity of the company, Foxley said only that it is “a major technology company that wants to bring a data center to Utah.”
>And that vision could soon be a reality, after members of the Utah County Commission voted Tuesday to approve roughly $150 million in property tax incentives to lure an as-yet-unnamed company — that sounds an awful lot like Facebook — to the southern end of Pony Express Parkway.
I admit I may be missing broader context about the state, this was specifically from someone working for Eagle Mountain city planning. But the article you've cited is later in the process than what I'm talking about.
When I heard "AI DJ" I got excited for beat matching/mixing. But this is really just a "Discover Mix" with an annoying AI voice to interrupt. Not sure why they felt it necessary to replicate the most annoying part of a radio DJ (talking).
Sounds like I'm looking for two other products: Party Mode (2015) and Auto-Mixing Testing (2018)
We do have the Spotify mix option as a beta option in France, but it is quite annoying... I definitely prefer the DJ pro engine mix when it comes to auto-mixing. Thanks for your feedback!
I fed about 4ish years of blood tests into an AI and after some back and forth it identified a possible issue that might signal recovery. I sheepishly brought it up with my doc, who actually said it might be worth looking into. Nothing earth shattering, just another opinion.