skissane's comments

> Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?

Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?

And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.

The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically classified computer systems/services contracts haven't contained field of use restrictions on classified computer systems.

If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?


Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.

That's not the same thing as delivering a weapon that has a certain capability and then putting policy restrictions on its use, which is what your comparison suggests.

The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.


Let me put it this way–if Boeing is developing a new missile, and they say to the Pentagon–"this missile can't be used yet, it isn't safe"–and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now"–how does Boeing respond?

I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.

Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if the Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.


Why are we calibrating our ethical barometer against the actions of existing companies and DoD contractors? The military-industrial apparatus has been insane for far too long, as Eisenhower warned.

When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?

The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.


There are two separate aspects to this case.

* autonomous weapons systems

* private defense contractor leverages control over products it has already sold to set military doctrine.

The second one is at least as important as the first one, because handing over our defense capabilities to a private entity which is accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved, but the solution cannot be to normalize the second problem.


It is interesting that this is a mainstream existing thing in the US (at the state level), but more of a fringe proposal in the rest of the English-speaking world.

I think the answer may be that the differences in political systems (parliamentary vs presidential) and party systems (less two-party, but with greater party discipline) solve many of the problems term limits are intended to solve, in completely different ways.

Maybe a better answer would be for US states to adopt the parliamentary system? Although there is some debate about what the "republican form of government" clause means, it arguably doesn't rule out parliamentary republicanism, and Luther v Borden (1849) ruled the clause wasn't justiciable anyway. Added to that, the widespread practice in the first half of the 19th century, in which governors were elected by state legislatures, was de facto the parliamentary system. I don't think there is any federal constitutional obstacle to trying this – it is just a political culture issue: it currently sits outside the state constitutional Overton window.

While you could adopt the Australia/Canada model of a figurehead state governor/lieutenant governor with a state premier, I think just having a premier but calling them "the governor" would be more feasible.


> Maybe a better answer would be for US states to adopt the parliamentary system?

Maybe. Maybe not. I don’t think it would change outcomes as much as people would think, but to scope limit this back to California again because electoral law discussions just fucking spiral anytime there’s no geographic constraint, the root of California’s lawmaking problems is that the legislature is both poorly structured and poorly balanced against the direct democratic approach we have taken for so much of our lawmaking. I don’t think that’s inherent to the non-parliamentary system we have in place, but a result of incremental rule changes stemming from decades of ballot propositions that are supposed to solve a problem, but don’t and tend to have negative knock-on effects that fly under the radar.

Or put another way: the legislature is for legislating. It doesn’t need a competing power structure, and it doesn’t need to be balanced by anything other than a good functional Executive power and a good functional independent Judiciary. If you have that as your starting point, then maybe there’s room to discuss if there are any real advantages of a Parliamentary system instead.


A very widespread belief among political scientists is that parliamentary systems are superior to presidential systems in terms of stability and quality of governance. In fact, even the US State Department's own "nation-building" advisors tell other countries not to copy the US system (or at least they did prior to Trump; I'm honestly not sure whether the Trump admin is sustaining that line).

Presidential systems have had a terrible run if you look at Latin America. The US seemed to be an exception to the rule, but maybe recent events have shown that the US got away with a substandard political system for so long because it had so many other advantages to compensate. Now those other advantages are weakening, and the US is slowly converging with Latin America.


I’m aware of the history, but my point is that as a specific reform to pursue, it’s noise.

If California moves to a Parliamentary system but maintains the popular ballot initiative that has undermined legislative power and allowed legislators to disclaim & dodge responsibility, or maintains the system of term limits I originally called out, then it doesn't matter whether it's our current bicameral legislature plus 5 Constitutional officers in the Executive branch or a full-on Westminster Parliamentary system or anything in-between: you'll still run into a lot of the same issues, because there are no silver bullets.

So I’m not saying it should never be up for consideration, but as a list of changes to make go? It’s too far down the list of serious considerations for me to view it as anything other than noise right now.


I guess the reality is, all proposed solutions have low odds of success, and their relative probability ranking is debatable.

At least something like "adopt the Australia/Canada model" is easier for people to understand, because while a radical change, they can point to somewhere else that has been doing it successfully for decades. Incremental tinkering with the current rules can make unengaged people mentally switch off by comparison; radical changes can be easier to understand because they can be simpler to explain.

I think one problem with the Australia/Canada model, is even though constitutional monarchy isn't essential to it – both countries could arguably function just as well if they were federal parliamentary republics – many Americans mentally conflate the parliamentary and constitutional monarchy aspects. If eventually either or both countries became republics, that would probably make it easier to sell the idea to Americans.

Germany is a living example of a federal parliamentary republic – but the language barrier limits its accessibility as a model for Anglophone emulation.

One backdoor way it might happen – although no doubt quite unlikely – would be if Alberta seceded from Canada, got admitted as a US state, but kept something close to its current parliamentary system.


The difficulty level of changing the California Constitution is not high, and is dramatically lower than changing the US Constitution. The difficulty is convincing people what to change and why.

Flipping the table because it’s dramatic is not my first choice.


That's an interesting point, but I suspect the biggest barrier isn't constitutional, it's behavioral.

> The supply chain risk designation will be overturned in court,

I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.

OTOH, I think it's almost guaranteed it will be watered down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude and dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.

And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.

> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.

I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.


> OTOH, I think it's almost guaranteed it will be watered down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude and dropping the Pentagon as a customer.

A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.


The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration’s reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead

They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).

Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.

And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?


> Are you sure you know how they'll judge this case?

I'm not even sure it will get that far. There are a million different ways this could go that mean it won't ever come before the Supreme Court. The designation isn't even in effect yet.

I do think that if it goes into effect it will eventually be overturned (by the Supreme Court or otherwise). There just isn't a serious argument that they qualify as a supply chain risk, and there is no precedent for it.


This tweet (from Under Secretary of State Jeremy Lewin) explains it:

https://x.com/UnderSecretaryF/status/2027594072811098230

https://xcancel.com/UnderSecretaryF/status/20275940728110982...

The OpenAI-DoW contract says "all lawful uses", and then reiterates the existing statutory limits on DoW operations. So it basically spells out in more detail what "all lawful uses" actually means under existing law. Of course, I expect it leaves interpreting that law up to the government, and Congress may change that law in the future.

Anthropic wanted to go beyond that. They wanted contractual limitations on those use cases that are stronger than the existing statutory limitations.

OpenAI has essentially agreed to a political fudge in which the Pentagon gets "all lawful uses" along with some ineffective language which sounds like what Anthropic wanted but is actually weaker. Anthropic wasn't willing to accept the fudge.


Well, or just the possibility of future-proofing the agreement in favor of the US government, as well as walking back the slippery slope of "no autonomous lethality" and "no mass surveillance".

The former grants Congress the ability to change the definition of "all lawful uses" as democratically mandated (e.g., if war is officially declared, or if martial law is declared).

The latter is subtle. There can exist human responsibility for lethal actions taken by fully autonomous AI: the individual who deploys it, for instance, can be made responsible for the consequences even if each individual "pulling of the trigger" has no human in the loop (unacceptable from Dario's PoV).

Similarly, and less subtly, acceptance of foreign mass surveillance and of domestic surveillance (as long as it is lawful and doesn't cross the line into unlawful mass surveillance!) seems to be more in the Pentagon's favor.

Whether we like it or not, we're heading into some very unstable times. Anthropic wanted to anchor its performance to stable (maybe stale) social norms; the Pentagon wanted to be able to rely on an AI provider even as we change those norms.


"All lawful uses" has no meaning when a malignant narcissistic sociopath in power controlled by ruthless rich psychopaths can now rewrite every law at will.

Because the US government has such a great track record on ensuring that this kind of stuff is only done legally with the utmost integrity. /s

> If Trump can do this to Anthropic, a Dem President will do it to xAI. We have no idea where the contagion stops.

Will the next Democratic President do it to xAI? On what grounds?

The Biden admin negotiated a contract with a supplier on terms which are, to the best of my knowledge, rather unprecedented. Do Pentagon contracts normally have terms like this, restricting the government's use of the supplied good or service? Do missile or plane contracts with Boeing or Lockheed Martin contain restrictions on what kind of operations that hardware will be used in? I don't think that's the norm.

So the next administration tears up a contract made by the previous admin with unusual terms – nothing unexpected about that. The "hardball" of declaring them a "supply chain risk" escalates this dispute to a never-before-seen level, but the underlying action of cancelling the contract doesn't. I honestly suspect the "supply chain risk" aspect will be suspended by the courts, and/or heavily watered down in implementation; but the act of cancelling the contract in itself seems legally airtight.

Next Democratic administration inherits a contract with xAI (and quite possibly OpenAI and/or Google too) – with presumably standard terms. I can totally understand the political desire for vengeance. But what's the actual legal justification for it? Facially, the current administration has a politically neutral justification for what they are doing, even if some suspect there is some deeper political motivation. Will the next Democratic administration have such a facial justification for doing the same to xAI?

Plus, Democrats always sell themselves on "we obey norms". They have the structural disadvantage that either they keep their word on that, and can't do the same things back, or they break their word, and risk losing the people who supported them based on that word.


> Will the next Democratic President do it to xAI? On what grounds?

Elon being affiliated with Trump. About the strength of logic that makes Dario woke.

> don't think that's the norm

Norms are different from law or contract. And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.

> can totally understand the political desire for vengeance. But what's the actual legal justification for it?

President has core Constitutional control of the military.

> Democrats always sell themselves on "we obey norms"

That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.

> risk losing the people who supported them based on that word

The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt. These are court politics, at the end of the day.


> Elon being affiliated with Trump. About the strength of logic that makes Dario woke.

I think you have to distinguish between the official justification and some of the associated political rhetoric.

Official justification: "Previous admin agreed contract with unprecedented terms, we demand those terms be removed, vendor is refusing to renegotiate"

Political rhetoric: "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"

If you forget about the political framing, and look at the official justification in the abstract, it doesn't actually seem facially unreasonable. The escalation to "supply chain risk" is a different story, but the core contract dispute and cancelling the contract as a result of it isn't.

So the question is, can Democrats come up with an equivalent abstract official justification–if so, what will it be? Or do they decide they don't even need that–in which case they aren't just matching Trump, they are going even further down the road to normlessness than he's gone.

> And yes, lots of service providers limit where their civilians can be deployed and under what circumstances.

There's a big difference between contracts for boots-on-the-ground and contracts for hardware/software. There is lots of precedent for contractual limitations on how boots-on-the-ground can be used. I'm not aware of similar precedent for hardware or software.

> That hasn't worked. The American electorate is looking for change. And up-and-coming Democrats are picking up on that.

Are they? Gavin Newsom? Zohran Mamdani? AOC? Do they actually sell themselves as "we see Trump breaking the rules, and we'll break them just as hard, even moreso"?

> The Democrat base absolutely wants vengeance. It doesn't play in swing states. But it probably also doesn't hurt.

It is too early to tell. You can argue in the abstract that X approximately equals Y, so if swing voters will tolerate the GOP doing X, they'll also tolerate Democrats doing Y – but the actual swing voters might not agree with you on that.


> @dang

@dang doesn't actually notify anybody. It isn't guaranteed dang will see it.

Email hn@ycombinator.com and someone will see it.


I sometimes go in the opposite direction: generate LLM output and then rewrite it in my own words.

The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice


This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.

I do something similar: give it a bunch of ideas I have or a general point form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.

It's a fantastic editor!


That's a perfect use, imho, of AI-assisted writing. Someone (er, something) to help you bounce ideas and organize...

> Even if you can argue the British didn't deliberately cause famine over their subjects, they almost never took active steps to alleviate them.

They sent Protestant missionaries with free food for kids (souperism). Private charities, but the government used them as an excuse to not provide more government aid.

And a lot of Catholic parents decided they’d rather their children be dead than risk them becoming Protestant.


This isn't a lock.

It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.

If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.


This view is too simplistic. AIs could enable someone with moderate knowledge to create chemical and biological weapons, sabotage firmware, or write highly destructive computer viruses. At least to some extent, uncontrolled AI has the potential to give people all kinds of destructive skills that are normally rare and much more controlled. The analogy with the hammer doesn't really fit.

Given the increasing use of them as agents rather than simple generators, I suggest a better analogy than "hammer" is "dog".

Here's some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991


How many people do dogs kill each year, in circumstances nobody would justify?

How many people do frontier AI models kill each year, in circumstances nobody would justify?

The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed – when a dog kills a three year old, nobody is calling that a good thing or even the lesser evil.


> How many people do frontier AI models kill each year, in circumstances nobody would justify?

Dunno, stats aren't recorded.

But I can say there's wrongful death lawsuits naming some of the labs and their models. And there was that anecdote a while back about raw garlic infused olive oil botulism, a search for which reminded me about AI-generated mushroom "guides": https://hackernews.hn/item?id=40724714

Do you count deaths by self-driving car in such stats? If someone takes medical advice and dies, is that reported like people who drive off an unsafe bridge while following Google Maps?

But this is all danger by incompetence. The opposite, danger by competence, is where they enable people to become more dangerous than they otherwise would have been.

A competent planner with no moral compass, you only find out how bad it can be when it's much too late. I don't think LLMs are that danger yet, even with METR timelines that's 3 years off. But I think it's best to aim for where the ball will be, rather than where it is.

Then there's LLM psychosis, which isn't on the competent-incompetent spectrum at all, and I have no idea if it affects people who weren't already prone to psychosis, or indeed if it's really just a moral panic hallucinated by the milieu.


> Let's ignore the words of a safety researcher from one of the most prominent companies in the industry

I think "safety research" has a tendency to attract doomers. So when one of them quits while preaching doom, they are behaving par for the course. There's little new information in someone doing something that fits their type.

