But the argument is not about market validation; it's about software quality. Vibe coders love shitting on experienced software folks until their code starts falling apart the moment there's any real-world usage.
And about pulling in devs: you can actually go to indeed.com and filter listings for co-founders and CTOs. They're usually equity-only, or with barely any pay, since these founders are used to getting code for free. No real CTO or senior dev will touch anything like that.
For every vibe-coded product, there are a hundred more clones. It's just a red ocean.
Hey HN - I built an agent harness for NetHack that exposes a Python sandbox for agents to write game commands and script their way to ascension. More recently I built a web app around this framework that allows anyone to watch the agents play live - you can even sign in with OpenRouter and run your own playthroughs! More information about the agent harness here:
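To make the harness idea concrete, here is a minimal, purely illustrative sketch of what "a Python sandbox for agents to write game commands" could look like. The names (`Sandbox`, `send`, `observe`, `agent_turn`) are assumptions for illustration, not the project's actual API:

```python
class Sandbox:
    """Toy stand-in for a NetHack session: records keystrokes sent.

    A real harness would forward keys to the game process and return
    screen/state observations; this sketch only tracks what was sent.
    """
    def __init__(self):
        self.keystrokes = []

    def send(self, keys: str) -> None:
        # Hypothetical: in the real harness this would feed keys to NetHack.
        self.keystrokes.append(keys)

    def observe(self) -> str:
        # Hypothetical: the real harness would return the current game state.
        return f"{len(self.keystrokes)} commands sent"

def agent_turn(sandbox: Sandbox) -> str:
    # An agent-written script for one turn: search, then move east (vi-keys).
    sandbox.send("s")
    sandbox.send("l")
    return sandbox.observe()

print(agent_turn(Sandbox()))  # -> 2 commands sent
```

The point of the pattern is that the agent emits ordinary Python against a narrow command API, so its "moves" are scripts the harness can replay and audit rather than free-form tool calls.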
Yeah, this is why I have shifted from "this seems like it was written with AI" to "this is full of clichés." Maybe it was written entirely by a human, maybe entirely with AI, or somewhere in between; in any case, clichés make it tiresome to read.
I find the cost discussion far more tedious. That line of thinking would be more compelling if we didn't have highly effective open-weight models like qwen3-coder, glm 4.7, etc., which let us directly measure the cost of running inference with large models without confounding factors like VC money. Regardless of the cost of training, the models that exist right now are cheap and effective enough to push the conversation right back to "quibbling about the exact degree of utility LLMs provide".
Sure, but Augment’s main value add is their context engine, and imo they do it really well. If all they had to do to compete was launch an MCP server for their context engine product, I think the comparison is still worth exploring.