Hacker News | new | past | comments | ask | show | jobs | submit | kenforthewin's comments

If that's true, it sounds like the vibe coders are winning - they're creating products people want, and pull in technical folks as needed to scale.

But the argument is not about market validation; it's about software quality. Vibe coders love shitting on experienced software folks right up until their code starts falling apart the moment it sees any real-world usage.

And about pulling in devs - you can actually go to indeed.com and filter listings for co-founders and CTOs. They're usually equity-only, or barely any pay, because these founders are used to getting code for free. No real CTO or senior dev will touch anything like that.

For every vibe-coded product, there are a hundred more clones. It's just a red ocean.


Hey HN - I built an agent harness for NetHack that exposes a Python sandbox for agents to write game commands and script their way to ascension. More recently I built a web app around this framework that allows anyone to watch the agents play live - you can even sign in with OpenRouter and run your own playthroughs! More information about the agent harness here:

https://kenforthewin.github.io/blog/posts/nethack-agent/
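To make the idea concrete, here is a minimal sketch of what "a Python sandbox for agents to write game commands" could look like. This is not the actual harness API; the `Harness` class, `step` method, and `agent_policy` function are hypothetical stand-ins that illustrate the loop of an agent scripting keystroke commands against observations.

```python
class Harness:
    """Toy stand-in for a NetHack agent harness (hypothetical API)."""

    def __init__(self):
        self.turn = 0
        self.log = []

    def step(self, command: str) -> dict:
        """Send one keystroke command (e.g. 'h' to move west) and
        return an observation the agent can script against."""
        self.turn += 1
        self.log.append(command)
        return {"turn": self.turn, "last_command": command}


def agent_policy(obs: dict) -> str:
    # A trivial scripted policy: always search ('s'), showing how an
    # agent's Python code maps an observation to the next command.
    return "s"


harness = Harness()
obs = harness.step("h")  # move west once
for _ in range(3):       # then let the policy drive
    obs = harness.step(agent_policy(obs))

print(harness.log)  # ['h', 's', 's', 's']
```

In the real harness the agent would presumably read the game screen out of each observation and emit whatever NetHack commands its script decides on, but the shape of the loop is the same.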


We've been optimizing for decades to engineer the bullshit-generating super-soldiers required to craft modern PR statements.

...what?




> This is insanity masquerading as pragmatism.

> This is not engineering. This is fashion masquerading as technical judgment.

The boring explanation is that AI wrote this. The more interesting theory is that folks are beginning to adopt the writing quirks of AI en masse.


I feel more like AI have adopted some preexisting disagreeable writing styles from the beginning and now we associate these with AI.


The way I like to phrase this sentiment is "This guy is the training data."


Yeah, this is why I have transitioned from "this seems like it was written with AI" to "this is full of clichés." Maybe it was written entirely by a human, maybe entirely with AI, or somewhere in between, but in any case the clichés make it tiresome to read.


At least I'm not the only one who noticed. It's genuinely weird and unsettling how easily AI-written blog posts like this reach the top of HN nowadays.


Just a reminder that everyone copes with change differently.


I find the cost discussion far more tedious. That line of thinking would be more compelling if we didn't have highly effective open-weight models like qwen3-coder, glm 4.7, etc., which let us directly measure the cost of running inference on large models without confounding factors like VC money. Regardless of what training cost, the models that exist right now are cheap and effective enough to push the conversation right back to quibbling about the exact degree of utility LLMs provide.


Congrats. From my experience, Augment (https://augmentcode.com) is best in class for AI code context. How does this compare?


augment is a coding agent. nia is an external context engine for coding agents that improves their code output quality


Sure, but Augment’s main value-add is their context engine, and imo they do it really well. If all they'd have to do to compete is launch an MCP server for that context engine, I think the comparison is still worth exploring.



yeah, their mcp is to provide better context of your own codebase. not external information.

