Hacker News | Incipient's comments

I'm sure a vibe-coded internal or external application WILL break a company. The thought process, however, is: out of 10 companies:

- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

- 2 will partly use AI, and maybe keep up, maybe not

- 1 will go nuts, vibe-code an entire app, and explode (see the Tea app or whatever)

- 4 will have an inefficient app, suffer reputational damage, lose some money, or similar, but probably survive

- 1 will hit the jackpot and get a 100M ARR company with 4 people.

Stats are of course completely made up, but you get the point.


> 1 will hit the jackpot and get a 100M ARR company with 4 people.

I will point out that at the point where you get 100M ARR, it seems worth it to hire more people regardless.

But I'm guessing that the bar to be hired will be EXTREMELY high, because IMO the best people to hire in a future heavy-AI-automation era would basically be founder-level visionary leaders who are also subject matter experts and can consistently make good decisions, and you'd give them $1M+ salaries in exchange.

If you have 100M ARR you can probably afford like 30 of these employees (and the probably exorbitant recruiting fees required to find them) and have them command AI all day. So your company will be extremely small in headcount but still more than 4 people.

(oh and how will this affect wealth inequality? i prefer to not think about it)


I love these little utopian scenarios that make the average HN user drool, because they relentlessly avoid considering the core issues with mid- to long-term AI sustainability. Namely: dependence on external models, fucked-up subsidization of model costs, and financial exposure to a downturn that could negatively affect the core product (the models). Yet this is the inevitable future, and if you dare raise concerns you're a luddite. Man, what have we come to.

I think it's more complicated than that.

Anything someone can vibe code that gains any level of mild traction can then be easily duplicated by all their competitors, in a fraction of the time, because the actual hard part, determining the product's edges, has already been done for them.


Agreed. This is why I think that platform/network effects will be mandatory to stay afloat in a lot of the tech market pretty soon. (Or other types of unfair advantages that are truly hard to overcome)

Even with network effects, it's still a race between you building an ecosystem and your competitors catching up to you.

However, if you DO have some sort of network effect moat and your competitors DON'T (yet), then you have the only advantage that matters in the world, because remember, vibe-copying goes both ways. You can copy your competitors feature-by-feature just like they can copy you. So you'll just always keep up feature parity while everyone only uses you because you're the established player with the biggest ecosystem, and soon enough you'll turn your temporary advantage into a permanent one.

Note: legacy platforms can't really benefit from this because you probably need to rewrite your product from scratch to fit any sort of cutting-edge AI dev workflow. Whoever creates an AI-native platform and scales it first wins.


> 4 will have an inefficient app, suffer reputational damage

Have we been living in different realities? I can't remember any example of companies in the past 10 years that have suffered reputational damage related to their inefficient apps. And there have been plenty of inefficient apps...


Sorry, there should have been an 'and/or' clause in there.

By reputational, I was thinking leaking data, or generating wrong information for users, etc.


Funny. I think that is exactly what's happening with OpenAI (and also Twitter/X/Xai/SpaceXai)

I mean, a lot do get reputational damage (e.g. a lot of people hate Jira because of how slow it is, or Microsoft Teams, same story) - it's just that nothing comes of it, so "suffered" is perhaps the wrong word here. People curse them and still use them.

Sonos.

Sonos?

>- 2 won't use AI at all and simply be left behind and stagnate (or go bust)

Why would they? As if their software being made faster is the differentiator?

In my career as a consumer (lol), choice was never about that. It was about the business proposition, pricing, quality of implementation, guarantees the company is gonna be there long term, them not being scumbags, and so on.

If anything, software churn put me off, especially when it came at the cost of messing with my established use, or stability.


Most products you consume are probably not software. Pretty much all products you consume are created by companies that use software.

If those companies don't keep up with software, they may lose the competitive edge that their competitors who do keep up will gain.


The software-creation-speed is even less of a factor for companies that don't make software/services.

As the old saying goes: 90% of software projects fail.

Chances are that most projects that use vibe coding will fail, and chances are that most projects that succeed will use LLMs.


Yeah, but no one needs to pay for SaaS software anymore, so no one is going to get a lucky $100M ARR business off pure vibe-coded software, as anyone can just make that in house.

The moat is absolutely about integration, not the underlying tech/model.

Azure Copilot can charge whatever it wants because you can't use anything else.


TLDR: people are cost conscious and cutting out the middle man by going direct to suppliers (or more direct).

I expect management probably didn't do as well as they could have too.

While tech has definitely enabled this shift, it doesn't seem overly relevant, outside of the current doomer views (albeit, it feels, valid views!)


I'm curious to know why you think that. In a sense, I have a feeling we replaced all the middlemen with a single one who controls who and what can be sold, and when. But that middleman is there, closing the whole of Venice for personal reasons.

Copilot has said they'll be giving out the previous month's "if you had paid token pricing, it would have cost $x" figures, so a lot of us will have real numbers to actually anticipate our spend.

Personally I'm anticipating agentic coding will be out of my price bracket (a single agent run costing US$20+ is far beyond what I can justify, especially with how often it fails). I'm planning on going back to optimised prompts on one-pass edits.
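A back-of-envelope sketch of that kind of spend estimate; every rate and token count here is a made-up assumption for illustration, not real pricing:

```python
# Rough monthly-spend estimate from per-run token counts.
# All rates and counts below are illustrative assumptions, not real pricing.

INPUT_RATE = 3.00 / 1_000_000    # $ per input token (assumed)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token (assumed)

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single agent run at raw token pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A long agentic run can chew through millions of input tokens
# (context gets re-sent on every tool call) plus output tokens.
cost = run_cost(input_tokens=5_000_000, output_tokens=200_000)
print(f"one agent run: ${cost:.2f}")     # one agent run: $18.00
print(f"30 runs/month: ${30 * cost:.2f}")
```

Plugging in numbers like these is how a single agent run lands in the US$20 range, and why a month of agentic coding can easily reach hundreds of dollars.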


>there's little correlation between token spend and the quality

My sentiment exactly! I have a very similar scaffold for each of my prompts, and feel I provide similar context files, yet sometimes I get a truly inspired, complex, and functionally complete response... and sometimes I'd have been better off running lorem ipsum through a Python interpreter.

I can't find any rhyme or reason to success. I'm not sure if prompting is significantly more nuanced than I realise, or it's the statistical magic that's having a laugh at me.

>open source models on inference-optimized hardware.

Is this actually a thing? Or are you talking about some hypothetical "opus 4.7 ASIC"?


You have one agent write the code, a separate one to do the docs, a third to harmonise them.

After a few weeks they'll settle on the documentation for raised garden bed and the implementation of a home defence sprinkler system.

All while leaving you with a $10,000 bill at the end of the month.

Ain't life grand.


The problem is I can't afford the tokens! Even on my $10/mo plan, running either 100 opus, or 300 sonnet agent runs would cost hundreds of dollars - well above my budget!

I wouldn't call it hindsight - I don't think anyone, at any stage, thought running a 10 minute+ sonnet session for 1 premium credit was ever profitable. We all knew it was a loss leader to get people using it.

It would have been profitable if that premium credit cost more than a negotiated discounted rate with Anthropic. We have no way of knowing if there were negotiated rates though!

There is no way to make that cost model consistently profitable. If 1 prompt can mean hundreds or thousands of requests over hours, and you only pay for that 1 premium prompt, it can never be profitable.

They can engineer the harness to limit the amount it does. When pressing enter, it'd be nice to have a "budget" per prompt, much like the model multiplier. When the harness used up the budget, it would clean up and cut off the work.

But that would entail actual work and effort... and care for users' time and money.
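A sketch of what such a per-prompt budget cutoff could look like inside a harness. The names here (`TokenBudget`, `run_agent`, the step tuples) are invented for illustration, not any real tool's API:

```python
# Hypothetical per-prompt budget guard for an agent harness.
# All names and structure are invented for illustration.

class BudgetExceeded(Exception):
    pass

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record token usage; raise once the per-prompt budget is blown."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise BudgetExceeded(f"budget of {self.max_tokens} tokens exhausted")

def run_agent(steps, budget: TokenBudget):
    """Run agent steps until done or the budget runs out.
    Each step is a (tokens_consumed, result) pair."""
    results = []
    try:
        for tokens, result in steps:
            budget.charge(tokens)
            results.append(result)
    except BudgetExceeded:
        # Instead of looping for hours, clean up and hand back partial work.
        results.append("<cleanup: budget hit, work cut off>")
    return results

out = run_agent([(4000, "edit A"), (5000, "edit B"), (8000, "edit C")],
                TokenBudget(max_tokens=10_000))
print(out)  # ['edit A', 'edit B', '<cleanup: budget hit, work cut off>']
```

The point is that the cutoff lives in the harness, not the model: the model never sees the budget, the loop just stops feeding it.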


Guys, you're discussing a house of cards to begin with: no matter how you're paying for the $CURRENTSOTA, you're not guaranteed that next month what you pay for will be the same.

So, lets do some honest evaluations:

1. The model itself is a non-deterministic engine of work with an unknown value; its real value is just magic.

2. The business model itself is a deterministic engine of profit with a known value; whatever the VCs have put into it _must_ be pulled out. If Ed Zitron's numbers are correct, circa 2030 it's several trillion dollars.

So do some matrix multiplication of non-determinism vs determinism, and realize that the value proposition for _you_ is only going to decrease because #1 can never outpace #2, ensuring enshittification captures a smaller and smaller whale.

We know this. This has been the last 2 decades of money extraction from software. It was OK when it was some 12-year-old's parents' CC. But now it's you, or your business, that's going to be either squeezed for value or squeezed out of the market.

And everyone's squabbling about the color of the cost. OK.


The problem with assuming that tokens can only get more expensive is that the Chinese open weight LLM firms have dropped models which have a known, fixed price that can never get more expensive (since we can run them on hardware we own).

Well, I guess we're not discussing the same thing. The cost of cloud tokens is going to go up. They won't ever be cheaper. They're generating far more tokens than my AMD 395+ w/128GB, at a much cheaper rate.

I agree, though, that it can't get cheaper than the cost of the hardware; it's just that without sufficient documentation of the actual costs to run the cloud models, we can't really know what the "true" cost of each token is. I assume there's an economist out there somewhere who could figure it out, though. Certainly, the cost should at a minimum approach that of an open-weights model running on a local machine.
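A rough break-even sketch between amortized local hardware and cloud token pricing; every figure below is an assumption plugged in for illustration (and it optimistically assumes the box runs flat out for its whole lifetime):

```python
# Back-of-envelope: amortized local inference cost vs. cloud token pricing.
# All figures are assumptions for illustration, not measured costs.

HARDWARE_COST = 2500.0       # e.g. a 128 GB unified-memory box (assumed)
LIFETIME_YEARS = 3           # amortization window (assumed)
POWER_WATTS = 120            # average draw under load (assumed)
KWH_PRICE = 0.30             # electricity, $/kWh (assumed)
TOKENS_PER_SEC = 25          # local generation speed (assumed)

CLOUD_PRICE_PER_MTOK = 15.0  # $ per million output tokens (assumed)

hours = LIFETIME_YEARS * 365 * 24
tokens_lifetime = TOKENS_PER_SEC * 3600 * hours        # tokens at 100% utilization
power_cost = POWER_WATTS / 1000 * hours * KWH_PRICE    # lifetime electricity, $
local_per_mtok = (HARDWARE_COST + power_cost) / (tokens_lifetime / 1e6)

print(f"local: ${local_per_mtok:.2f} per M tokens")
print(f"cloud: ${CLOUD_PRICE_PER_MTOK:.2f} per M tokens")
```

Under these assumptions local tokens come out far cheaper per million, but only because utilization is assumed to be 100%; a box that sits idle most of the day amortizes much worse, which is exactly the part of cloud economics we can't see from outside.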

I've successfully got Qwen3-coder-next to loop and generate sufficiently competent code, and from what I can tell, the difference between this and the cloud is how quickly the generation happens, and perhaps how interactive it has to be.


I'd call it a straight up "bait and switch".

If you paid attention to the power requirements and amount of hardware being put into data centers, you should have realized that it cost them an order of magnitude more than you were being charged. To rework your analogy: they hooked you, now they're gonna see if they can reel you in.

They can only reel you in if it's worth it. I can still code.

And while I do not spend $200 privately, in my startup we discussed this, and our current mental model is that, instead of hiring someone new, we prefer to have more money for tokens.

This is easier for us and has a bigger benefit. The cost of a new / first employee is very high; a $200 subscription is not. Upgrading that to, let's say, $400 or $800 is still a lot easier, and if I can run multiple and better agents with that money, let's goooo.


I'm looking at education -- teachers and students, not terribly tech savvy, are being mandated to use these tools. And then comes the rug-pull. It was worth it, but now it's outside of their budget. Poorer schools / students can't stay at the cutting edge; richer schools / students can.

You still get far with $20 if you don't use it daily for lots of coding and thinking, though.

And Gemma 4 and other open models can easily be hosted even for schools.


Oh, I thought it was opium.

I don't know how you'd confirm either way. A lot of stuff from the 80s is in landfill now because it's just old, even if not broken.

I'm sure we built a lot of badly made stuff then too, but my guess is: with our tighter manufacturing tolerances, we can push things closer to breaking point; with our improved casting/molding tech, we make stuff smaller and more complex, so it breaks more; and we also drive harder towards profit margins (unsourced claim!), so cutting corners/quality is more acceptable, as is planned obsolescence/planned failure.

