"claude -p" does not charge api rates by itself, I just ran "claude -p 'write hello world to foo.txt'", and it didn't.
What they changed is this: if you have OpenClaw run 'claude -p' for you, your account gets banned or charged API rates; and if they think your usage of 'claude -p' might be OpenClaw, even when it isn't, you get charged API rates or banned too.
It seems so silly to me. They built a feature with one billing rate, and the feature is a bash command. If you have a bad program run the bash command, you get billed at a different rate; if you have a good script you wrote yourself run it, you're fine. But they have literally no legitimate way to tell the difference, since either way it's just a command being run.
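To make that concrete, here's a minimal sketch (Python, reusing the prompt from above) of what "a program running the command" means; nothing about the invocation itself tells anyone who or what launched it:

```python
import subprocess

# Whether a human types this in a terminal, a hand-written script runs it,
# or an agent framework like OpenClaw shells out to it, the resulting
# process is identical: the `claude` binary invoked with `-p` and a prompt.
result = subprocess.run(
    ["claude", "-p", "write hello world to foo.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```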
The justification going around is that OpenClaw usage is so heavy that it degrades the service for other people. But OpenClaw was just using the "claude code max" plan, so if they can't handle the usage the plan promises, they should change the plan.
If they had instead said "Your claude code max plan, which has XX quota, will get charged API rates if you consistently use 50% of your quota. The quota is actually a lie; it's just the amount you can burst up to once or twice a week, but definitely not every day" and just banned everyone who used claude code a lot, I wouldn't be complaining as much. That would at least be consistent.
It only switches to charging API rates if some part of your prompt triggers their magic string detector. Lots of examples of that are floating around, where swapping "is" for "are" or the like will magically allow the request against your subscription plan again.
I find it bizarre that Anthropic's public image is still seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms about their tech being used for war and slaughter, save for two very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
The idea that it's not okay to arm the military is a position of privilege. The ethical issues are around how the military chooses to use its capabilities, not around giving them the tools to do their jobs. We're talking about folks who are willing to give up their lives for others. If you're not going to serve yourself, you should at least be willing to help them live. This has nothing to do with whether or not you support the political uses of the military. If World War 3 breaks out and you are forced to serve, you may find yourself feeling differently.
Yes, and... that's a position of privilege that anyone in that position should ethically take.
It's unfair to sweep providing capabilities to the military under a "respect the service" catch-all justification.
Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.
There's a reason the US stopped research on and production of offensive chemical weapons, biological weapons, and tactical nuclear devices -- effective capabilities will be used if they exist.
With respect to the weapons programs, I'm not a historian, but I was not under the impression that the US stopped development of these weapons unilaterally or out of goodwill. My understanding is that it was due to a mixture of not perceiving a need or use for the capabilities, along with formal or informal international cooperation eliminating the need for deterrence.
Just a couple of thoughts, since it seems like the next issues in this space are rapidly arriving or already here.
From what I've read of the literature from the '60s and '70s, tactical nukes were eventually eliminated in order to assuage Western Europe's concerns that large portions of their countries would be turned into irradiated wastelands for decades or centuries if war erupted between the US and USSR.
It was also the product of perceived overmatch on both sides -- the Soviets believed they had superior mass of armored formations (and they did), while the US and allies believed they had technological supremacy (and they did). Ergo, neither needed tactical nukes.
It didn't hurt that eliminating them also played well with the then vehemently anti-nuclear European movements.
Offensive bio and chemical weapon limitation is a more nuanced decision.
In both cases, their primary use was either local mass lethality or terrain denial, neither of which was important in the then-gelling American doctrines of maneuver.
The sole use case they seemed viable for was industry denial (e.g., contaminating a high-capital-cost industrial center), a task at which strategic-sized nuclear weapons were equally adept (and more easily stored). So, if you had to have strategic nuclear weapons for deterrence anyway, and they were capable of the same task, why keep fiddly bio and chemical weapons?
But in both cases there was also a constant radiating pressure from scientists and the public, who campaigned against them and were unwilling to work on or tolerate them.
Absent that, who knows how history would have turned out? Normalization is a powerful opinion shifter.
I'd feel much better about supporting the military actions of the people becoming part of that system if they exercised some fucking free will and didn't follow criminals in our government into wars that serve neither our people nor our country. We have a serious problem in our government, and being connected in any way with what is happening in that institution gives me great pause in believing in the people of this country. People are stupid not to fight this government tooth and nail.
Many ASR models already support prompts for adding your own terminology. This one doesn't, but full LLMs, especially such expensive ones, aren't needed for that.
For those experiencing 403 errors when accessing certain models:
We're required to comply with the terms of service of our upstream model providers. Our enforcement mechanisms and regional access rules are updated continuously. We understand that enforcement changes are disruptive for people who didn't realize their usage was in violation. You can find the relevant policy in the Prohibited Content section of our [Terms of Service](<https://openrouter.ai/terms#_6_-prohibited-conduct_>).
This is not a ban on using OpenRouter. We will keep working hard to add all the best models so everyone has great options to choose from, no matter where you are.
If you are seeing a 403 author banned error and believe you are not in violation of the Terms of Service of the AI Model or Provider you are using, please fill out the [appeal form](<https://forms.gle/yc2vyJiALz8Uhbmh7>) with detailed information. We will review these and correct any mistaken restrictions.
OpenRouter just practically killed hundreds of services in a day. This must be illegal; tons of services have been built around OpenRouter to provide API access to end consumers (at least to my knowledge).
Their ToS now states:
`access the Site or Service for purposes of reselling API access to AI Models or otherwise developing a competing service;`
It’s not like you ever owned anything when you built something on top of these sorts of services.
I think it is clear that there is no point in providing AI-based services via third-party AI. OpenRouter may even meet a similar fate if the upstream providers make a similar ToS change. I've always thought of OpenRouter as a useful tool for development and chat that lets me add a zoo of models quickly. Anything relatively close to production? Pin a model version and use a provider's API, for as long as that's supported.
The same argument could apply to AWS, though: you wouldn't expect them to just cut off you and all your sub-customers because you are leveraging infrastructure YOU PAY for to serve your users...
My guess is: probably not for Muon. What I said about ADAM was partly based on a blog post I read some time ago; I should have cited it as well [0].
The thing about Muon is that it doesn't have the specific feature of ADAM that causes it to "move along the diagonal". Basically, flatten the weights into a huge vector of a few billion elements: SGD moves along the gradient, which isn't biased toward any particular directions, while ADAM normalizes everything elementwise, so it roughly moves along a vector of ±1 entries, i.e. along a diagonal of the hypercube.
This isn't a proof or anything, but what you can imagine might be happening is that if you move along ±1 directions, you somehow end up at spiky solutions. Not sure how to prove that. Muon doesn't really do this, but it has its own sort of funky reshaping of the update (it moves along low-rank directions).
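A minimal sketch of that intuition (plain NumPy, with a made-up random gradient): the ADAM step is reduced to its steady-state elementwise form, and an SVD stands in for the Newton-Schulz iteration Muon actually uses to orthogonalize the update.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(512, 512))  # a fake gradient for one weight matrix

# SGD: the update direction is just the gradient itself.
sgd_step = G

# ADAM (steady state, simplified): elementwise normalization by the
# magnitude of recent gradients pushes every entry toward +-1, so the
# update points roughly along a diagonal of the hypercube.
adam_step = G / (np.abs(G) + 1e-8)  # entries are ~ +-1

# Muon (conceptually): orthogonalize the update, i.e. replace G = U S V^T
# with U V^T. The real optimizer approximates this with a Newton-Schulz
# iteration; an exact SVD shows the idea.
U, _, Vt = np.linalg.svd(G, full_matrices=False)
muon_step = U @ Vt  # all singular values flattened to 1

print("fraction of ADAM entries with |value| within 1% of 1:",
      np.mean(np.abs(np.abs(adam_step) - 1) < 0.01))
print("largest/smallest singular value of Muon step:",
      np.linalg.svd(muon_step, compute_uv=False)[[0, -1]])
```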
What they changed is that it now draws from "extra usage", which is charged at API rates.