Hacker News | past | comments | ask | show | jobs | submit | vanillameow's comments

I ran this just now, and for a small web-app I built I used over $50 in a single day. This was using the superpowers plugin and almost exclusively coordinating through Opus. Could I get by with $100 a month without the subscription? Maybe, but I pay for the convenience of just being able to throw Opus with lavish plugins at it (with 5h limits that are, in my opinion, pretty reasonable). I don't really WANT to have to think about when Haiku or Sonnet are enough.

If anything I would consider switching to an OpenAI subscription (if I didn't despise them even more than Anthropic as a company), but converting to API use seems completely infeasible to me. I'd have to severely cut back on my use for not much benefit, other than maybe having an agent that's a little less janky than CC.


Depending on your workflow, in the spirit of reallocating the $100/month subscription, it may be worth dropping to the $20/month plan (or equivalent at other providers) and then paying as you go on the (rare) occasions you "build a small web-app and use over $50 in a single day".

But at that point we are just min/maxing the details, and all I can say is if you are on a $100/$200 a month subscription to any of these services and not using them regularly then you shouldn't be on a $200 subscription any more than you should be on a $700 a month gym membership when you go every 3 months for 15 minutes.


Nah, I get you, but for me personally, I do use CC a ton. It's building me so many useful internal tools right now, and deep research is also bootstrapping me into some new hobbies etc. I think I'm in a kind of rare-ish spot (maybe not so much on HN, but for the general population) where I'm not really chasing some SaaS get-rich-quick scheme, but just directing CC to make apps that would take me a few days to make in a few hours, smoke-testing them, and solving a problem I had before. (e.g. photo tagging automation, MCP connections for personal services for documentation of chats or setting up todos, Ansible playbooks for VMs, setting up data pipelines from external APIs for home automation...)

I deffo get more perceived value out of it than the $100 I pay. Could I get MORE value for the same $100? Imo only through OpenAI (no harness lock-in and more lenient limits), but I deeply dislike the way their company is evolving. Admittedly, recent launches from Anthropic like managed agents and Mythos Preview don't make me very hopeful the individual developer pricing is here to stay, but I'll use what I can get while I can get it.

Could I get my required value for less than $100? Mayyybe I could get by with, like, three Anthropic $20 plans? Or 2x $20 and an OAI $20? But this is so min-maxy that I just don't really want to bother. Pay-by-token would kill my workflow instantly. I'd have to add so many steps for model selection alone. I'll cross that bridge when Anthropic cuts me off.

I agree, though, that most people on the $200 plans are either just not using them or in some deep AI psychosis. I'd like to exclude myself from both groups, but the pipeline to AI psychosis seems very wishy-washy to begin with (the thread the other day about iTunes charts being AI-dominated had a surprising number of people defending AI music, imo).


I don't understand how CC can burn that much money. I've built many webapps using Copilot, and our normal business tokens rarely run out. I'd say I've never exceeded 150% of a normal month's tokens.

I think Opus is just an expensive model on the API, especially without context management. A single message with near-full context (I think this was still on 250k as well) costs something like $1, iirc.
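For a rough sense of how that adds up, here's a back-of-envelope sketch. The per-million-token prices are illustrative placeholders I made up for the example, not Anthropic's actual rates:

```python
# Back-of-envelope API cost for one near-full-context message.
# PRICE_* values are assumed placeholders, not real Anthropic pricing.
PRICE_PER_M_INPUT = 5.00    # USD per million input tokens (assumption)
PRICE_PER_M_OUTPUT = 25.00  # USD per million output tokens (assumption)

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the assumed rates."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# ~250k tokens of accumulated context plus a ~2k-token reply
print(f"${message_cost(250_000, 2_000):.2f}")  # -> $1.30
```

The point being: without compaction, every request near the context limit re-bills the whole accumulated context, so a long agentic session multiplies that figure fast.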

Imo this is the premium I pay right now to just not have to worry about this. The project where I burned $50 in a day was using the superpowers plugin (a set of skills that makes Claude meticulously plan out design and implementation, interview for details, use subagents for subtasks and review them independently, etc.). It burns tokens like crazy, but it gives me super good results for custom software tools for myself.

I would probably change my approach if I a) was creating software for customers where I had to actually worry about the implementation details or b) if I was forced to switch to API and couldn't just throw Opus at a 28-task plan for an hour. But this works for me right now so meh. I feel like I'm in some rare Goldilocks zone where Anthropic is not super ripping me off (I use CC quite heavily and generate real value for myself) but I also don't go crazy if I go 2 days without building the next SaaS startup.


Genuine question - your README is full of em-dashes, emojis, feature squares and ASCII diagrams - none of which are present in your pre-AI era projects.

Why do you expect a potential userbase to care to read something you didn't even care to write?

Seems a bit disrespectful to me.


Provided he reviewed it and checked that the readme tells users what they need to know - what's the issue? I've found documentation to be one of the better tasks AI can perform and see no reason not to use it for that, provided a human is in the loop.


Just looking at the diagrams in the README, the broken ASCII suggests to me it either wasn't looked at or the author didn't care.


I agree, but it's tricky, as many people seem not to read it, and I have seen AI documentation that is so verbose and dense that it's almost as useless as not having it. It's a fine line, but so long as the AI documentation is reviewed and reasonable, I see no issue.


1. In reality most people simply do not do this, and frankly it's exhausting to be expected to always assume goodwill in a setting that is full of pure vanity.

2. There's a difference between technical documentation, which AI can be quite decent at, and product marketing. A README is usually about 20/80, maybe 50/50 for large FOSS projects. You can have the AI write the sections on how to install the thing for all I care, but as soon as AI is telling me why I should use it, you've lost me. Signals a complete lack of interest in your own product.


It's a strong signal of low quality.

The question is: "Should I spend my time engaging with this project?"

The AI-forward presentation says: "Absolutely not."


Potential userbase... you mean people who use OpenClaw? Not sure if they care.


Probably not the typical OpenClaw user. I’ve had that thought myself.

But I can imagine that medium-sized companies will want to use AI as a backend in the future, without wanting to be dependent on Anthropic.

After all, there are already quite a few companies using OpenClaw.

A self-hosted OpenClaw instance (or other solutions in the future) with Relay would be a good alternative to Claude Cowork.


I do think a properly hand-written readme is better, but if not, here's a blatant plug - https://github.com/joelio/plain-english


If you need to author a lot of emails with an LLM, you should be rethinking your business strategy tbh


I'm surprised to see this getting so much positive reception. In my experience AI is still really bad with documenting the exact steps it took, much more so when those are dependent on its environment, and once there's a human in the loop at any point you can completely throw the idea out the window. The AI will just hallucinate intermediate steps that you may or may not have taken unless you spell out in exact detail every step you took.

People in general seem super obsessed with AI context, bordering on psychosis. Even setting aside obvious examples like Gas Town or OpenClaw or that tweet I saw the other day of someone putting their agents in scrum meetings (lol?), this is exactly the kind of vague LLM "half-truth" documentation that will cascade into errors down the line. In my experience, AI works best when the ONLY thing it has access to is GROUND TRUTH HUMAN VERIFIED documentation (and a bunch of shell tools obviously).

Nevertheless it'll be interesting to see how this turns out, prompt injection vectors and all. Hope this doesn't have an admin API key in the frontend like Moltbook.


That can happen if the history got compacted away in a long session. But usually AI agents also have a way to re-read the entire log from disk. E.g. Claude Code stores all user messages, LLM messages, thinking traces, tool calls etc. in JSON files that the agent can query. In my experience it can do this very well, though the AI might not reach for those logs unless asked directly. I can see that it could be more proactive, but this is certainly not some fundamental AI limitation.
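As a sketch of what "querying the log" can look like, here's a minimal JSONL reader that tallies session entries by type. The layout (one JSON object per line with a `type` field) is an assumption about how such session logs are commonly structured, not a documented Claude Code format:

```python
import json
from collections import Counter

def summarize_session(log_path: str) -> Counter:
    """Tally session-log entries (e.g. user/assistant/tool) by 'type'."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            entry = json.loads(line)  # one JSON object per line (JSONL)
            counts[entry.get("type", "unknown")] += 1
    return counts

# e.g. summarize_session("session.jsonl") might return something like
# Counter({'assistant': 41, 'tool': 30, 'user': 12})
```

An agent given a small tool like this (plus grep over the same files) can reconstruct what actually happened in a session instead of relying on its compacted memory.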


I have a completely different experience. Which models are you talking about? I have no trouble at all with AI documenting the steps it took. I use Codex gpt5.4 and Claude Code Opus 4.6 daily. When needed, they have no issue describing what steps they took and what problems came up during the run, documenting it all as a SKILL, then reusing and fixing the instructions on further feedback.


I use mainly Opus 4.6.

I did the same thing and created a skill for summarizing a troubleshooting conversation. It works decently, as long as my own input in the troubleshooting is minimal. i.e. dangerously-skip-permissions. As soon as I need to take manual steps or especially if the conversation is in Desktop/Web, it will very quickly degrade and just assume steps I've taken (e.g. if it gave me two options to fix something, and I come back saying it's fixed, it will in the summary just kind of randomly decide a solution). It also generally doesn't consider the previous state of the system (e.g. what was already installed/configured/setup) when writing such a summary, which maybe makes it reusable for me, somewhat, but certainly not for others.

Now you could say, "these are all things you can prompt away", and, I mean, to an extent, probably. But once you're talking about taking something like this online, you're not working with the top 1% proompters. The average claude session is not the diligent little worker bee you'd want it to be. These models are still, at their core, chaos goblins. I think Moltbook showed that quite clearly.

I think having your model consider someone else's "fix" to your problem as a primary source is bad. Period. Maybe it won't be bad in 3 generations when models can distinguish noise and nonsense from useful information, but they really can't right now.


Isn’t what you’ve just described - the context bloat problem, the part about the web?

I’m not sure I quite get the same experience as you with the “assumes steps it never took”. Do you think it’s because of the skills you’ve used?

I also disagree that having at least some solution to a similar problem is inherently bad. Usually it directs the LLM to some path that was verified, if we’re talking about skills


The steps they say they took and steps they took are not the same thing.


I am not sure how I feel about all these hype-driven tools, honestly, especially considering they are super janky, probably because they were rushed out with Claude Code.

It reminds me that I don't really like Anthropic as a company, I just like Claude as a model a lot. It just feels more capable and personable than the others. I wonder if / when OpenAI et al. will be able to replicate it.

For now, I basically have no choice but to use the walled garden but I do hope Anthropic is not completely compromising their core mission of actually making the model better rather than following these public bandwagons.

Then again most of these probably take them like a day to develop through a junior dev talking to Claude Opus 5 or some shit lol (and to be fair, it shows). I don't know.


Very well put. I love Claude, but Anthropic as a company sucks.


This LinkedIn translator was a genius move by Kagi, honestly. A lot of people are incredibly tired not only of AI(-adjacent) writing, but overall of people with a stick up their ass thinking they're this generation's Aristotle.

Having their slop written out in plain English really shows you how vain it all is.


They actually did not add LinkedIn specifically. It's an AI translator that accepts anything in the `to` field.

https://translate.kagi.com/?from=en&to=Crypto%20Scammer&text...


So I've seen. It's just the LinkedIn one is what they advertised. Speaks to the fact that it's probably some slopcoded thing, which I'd usually get mildly upset about but who can muster the effort in this economy. I think the point still stands though.


They should make an extension for it (assuming it really is a small model behind the scenes and is relatively cheap to serve)


You shouldn't pick up fights with them, even if they run out of ammo, they'll just use the stick up their ass as a backup weapon.

I'll show myself out of the Citadel.


Most people don’t read Aristotle. If thousands of wannabe philosophers can paraphrase some of his ideas and make them accessible to the masses, that’s a net plus. If they can stroke their vanity along the way, even more people win.

It’s much better than writing bitter diatribes.


To be fair, he’s a much, much drier read than Plato.


...what is your actual point? I'm pretty sure none of the shit I read on LinkedIn is making "philosophical ideas accessible to the masses", it's churned out 20x regurgitated self-promotional material.

Is this a bot post?


Well, their bio is full of LinkedIn speak, so draw your own conclusions...


I just translated that to English via Kagi so that I could understand it.


haha. No, actual human here.

My point is that some people find this stuff valuable. You're not the target audience.


I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?

"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."

"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."

etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but also, when we sit down with them in meetings, they understand what's being created, they are able to argue their architectural choices, and they know how to propose business value.

You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then a) everyone can do it, so you won't actually have any value to propose, and b) once the AI can run businesses without humans in the loop, you can bet your ass they will not, out of the goodness of their hearts, keep giving that ability away for $20.

In summary, AI used to accelerate a business _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.


That really reminds me of the "mashup" bubble of the late 2000s, when all services started to provide APIs and people were calling themselves "entrepreneurs" for combining 2 sources of data, like putting craigslist ads on a map.

That didn't last long!


Are you sure? We have many SaaS and final products which are just stitching together more SaaS. We have a very vocal part of the HN community always reminding you to buy a SaaS solution and connect it to your business instead of maintaining an in-house bespoke solution.


Isn't almost everyone doing that? Deploy Docker to AWS, connect to Slack, OpenAI and Anthropic to do X, Y, Z.


that's like saying my job is to transfer money from my employer to the homeowner. Technically true but something else happens in the process


So you're saying mashups were literally just connecting 2 things and selling it.


I think there's a "time window" right now, before most people realize the scale of AI. Those who jump in first can monetize it. It certainly won't last forever, but you can earn some money while it lasts. And you will have years of AI-relevant experience afterwards.


Not incorrect, but it honestly borders on grifting a lot of the time imo. At least it's a spectrum. If you are supercharging your existing technical and domain knowledge, and actually caring about the security of your customers while doing so, fair play. That is real entrepreneurship.

Then there's people who are "well intentioned", I guess, but lack the technical knowledge. A friend of a friend with no technical background is selling websites to companies that he writes with Claude. They look shiny, everyone's happy in the short run, but I don't doubt issues will come up down the line that someone will have to be responsible for. I'd personally feel like I was ripping people off doing this, but I think also Dunning-Kruger prevents you from knowing any better if you are the type of person doing this.

Then there's the whole B2B SaaS gang that are basically just producing vaporware and telling other people how to produce more vaporware. This is no different from crypto, NFTs etc. before it really. Just people trying to hustle others.

And then there's the whole clawdbot gang, probably burning more in tokens every day than normal people use in a month so they can sort 18 e-mails.

So yeah I mean you're right, there certainly is a subset of people who are using this ethically (as ethically as you can use LLMs but that's another story) to make some money on the side. Certainly not the majority though I'd say.


If the technology becomes cheaper, this creates more market pressure by changing the cost base of certain products. For example, when the printing press was invented, books went from luxury items to something expensive but more affordable. In software markets that means we will have more software and more competition, and in free market segments profits will evaporate.

The pseudo-"entrepreneurs" who think they can outsmart the market by working less are just naive. In a free market economy optimization is brutal, and a freelance developer will sell the same "product" cheaper, because he has the same technology available to him.

So the only way to get the gains from these AI technologies is to have something that can't be easily copied like market knowledge, data access or sweetheart deals with big companies that can pay more because their profits support the higher spend.

Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation. But the margins will go waaay down. $25 for a set of forms and a database is not gonna cut it anymore.


> Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear for a vacation.

True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the one-shotting (reliable) software dream that companies like Anthropic and Perplexity currently peddle into reality. Seems far-fetched ATM but the gains since GPT-2 have been very real.

We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.

As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.


It’s funny how so much of market demand just ends up boiling down to basic needs. Everyone’s always trying to hustle so they don’t have to worry about financial instability.

The quote about being temporarily embarrassed millionaires comes to mind….


> You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then a) everyone can do it, so you won't actually have any value to propose

Louder for those at the back.


> You can't just buy a Claude subscription and have it magically solve your problems.

Devil's advocate: the operating intelligence of Opus 4.6 is higher than the average person's, and it has orders of magnitude more domain knowledge.

If Average Joe were to delegate most of their life decisions to the chatbot, it'd probably turn out better, or in the worst case, more informed.


> The operating intelligence of Opus 4.6 is higher than the average person's

Okay how are we measuring this? We can't even quantify intelligence for humans accurately, let alone compare it to machines. Hell, we can't even really define intelligence.

I mean, humans can learn on the fly and progressively, and currently no LLMs are capable of that. Literally none of them, and no context doesn't count. So if that's the measure, then LLMs sit at a 0 along with rocks and twigs and humans closer to a 1.

Obviously that's not really the measurement, LLMs are quite good. But I don't think we can say, for sure, LLMs are a replacement for humans. They might replace some specific tasks, but humans are not a set of tasks. I'd still rather have 10 engineers than 0 engineers and 10 Claude Code licenses.


> If Average Joe were to delegate most of their life decisions to the chatbot, it'd probably turn out better, or in the worst case, more informed.

Even if true, no one is ever going to do that because of Dunning Kruger, so it's still not magically solving problems.


A great AI future is the robots doing stuff so we can be free. But none of the major -isms, i.e. capitalism or communism, are geared up to provide that. Maybe it's hackable with a UBI-and-capitalism mix.


> I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?

Fake-it-till-you-make-it mentality that degenerated completely once we got the internet. It used to be "crypto will make you rich, buy my coin/course"; now it's "AI will make you rich, buy my tool/course". The same type of people will get fleeced.

These are the people getting all the attention: https://www.youtube.com/watch?v=NwaUMBQ3Wgg


That’s what really gets me. These folks who are “so rich from said technology” always need you to buy their course for $5,000… Like, buddy, if you were bringing in so much money, you probably wouldn’t be pestering people to take your “course”, and you certainly wouldn’t give away info that has value only because it's obscure or hard to do… They are also almost ALWAYS self-proclaimed experts. Overnight, everyone became an AI expert. Before ChatGPT they probably had zero exposure; AI was a large field, and machine learning is one small part of it.


Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.

https://simonwillison.net/guides/agentic-engineering-pattern...

Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.

"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."

Give me a fucking break


Your reaction is worse than the article. There's no way you could know for sure what their writing process was, but that doesn't stop you from making overconfident claims.


I’m sorry but no attempt was made here. It contains all the red flags in the first few paragraphs.


Sorry, but it seems like most people don't care, or even like AI writing more:

https://x.com/kevinroose/status/2031397522590282212


That's the problem with AI writing in a nutshell. In a blind, relatively short comparison (similarly used for RLHF), AI writing has a florid, punchy quality that intuitively feels like high quality writing.

But then, after you read the exact same structure a dozen times a day on the web, it becomes like nails on a chalkboard. It's a combination of "too much of a good thing" with little variation throughout a long piece of prose, and basic pattern recognition of AI output, as models coalesce to a consistent style that can be spotted, as if 1-3 human ghost writers wrote a quarter of the content on the web.


One thing I've learned recently is that a lot of guys (like here) have been out here reading each word of a given company's tech blog, closely parsing each sentence construction. I really can't imagine even being conscious of the prose for something like this. A corporate blog, to me, has some base level of banality to it. It's like reading a cereal box and getting angry at the lack of nuance.

Like, who cares? Is there really some nostalgia for a time before this? When reading some press release from a cybersecurity company was akin to Joyce or Nabokov or whatever? (Maybe Hemingway...)

We really gotta be picking our battles here imo, and this doesn't feel like a high priority target. Let companies be the weird inhuman things that they are.

Read a novel! They are great, I promise. Then when you read other stuff, maybe you won't feel so angry?


I've picked up reading again over the last year or so! Maybe, if anything, that is why I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.

What makes me angry, is to use the feelings we associate with this process and disingenuously pretend that there is a human that wants to tell me something, just for it to be generated drivel.

Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unexposed API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody that uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic, than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to use my limited lifespan discerning it. That is why I am angry.

Anyway, there's no need to "closely parse each sentence construction" at all to identify this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.

ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that. My initial post was more of a projection at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing that they publish under their own name actually contains original thought or not.


Ah well I guess you are on the right side of this either way! No need to even explain. It seems that people really really do care, and its wrong to say maybe its ok that they don't have to in this case. I guess I get it, I am generally more wrong the right anyway, and yes, at the very least, I am clearly in some way sub literate and uncritical as a reader, who can't tell the difference anyway. Not really the guy to be giving his opinion here. I will go find some slop to enjoy while the adults figure out the important stuff! Thanks for teaching me the lesson here.


Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.


> If the primary appeal of your VR universe is that your avatar can be an anthropomorphic banana, an anime girl, a furry, a giant penis with legs - that's never going to become a 300-million-user platform.

I mean the inherent appeal of VR is self-expression; being who you want to be, seeing the worlds you want to see. You won't get 300 million users with corporate slop either. That maybe works once, if ever, VR headsets become an interface suitable for white collar work, which they currently very much aren't, and then it wouldn't be the next Facebook - it'd be the next Microsoft Teams. Which is not really in line with Meta's other offerings, though they certainly wouldn't say no to it I guess. But I think a 500-user survey is all it would take to get a very clear signal that current VR is NOT about to replace Teams.

