Hacker News | new | past | comments | ask | show | jobs | submit | inkysigma's comments | login

I'm assuming this is satire but I'm wondering why include names of seemingly random people? Why not leave it empty or make it signed by high level known executives.

Good idea, I just changed it to execs. I had originally filled it with placeholder names.

Right? These people are just following orders!

I mean in the sense that they seem to literally be random names. I don't even think they're people associated with Palantir in any way.

Once that side channel was found, it was kind of inevitable it would be plugged. Even under a normal administration, that's an opsec leak.

Seriously. They can put a Burger King anywhere on the planet in 24 hours, but can't do their own pizza at the Pentagon?

Passkeys are an open standard? You might as well argue against SSH keys.

The standard includes a hardware attestation path.

That’s the backdoor allowing the eventual takeover of your OS.

First people use passkeys, and they become standard.

Then they become required for important accounts for security.

Then the important accounts require the attestation bit.

At that point, you cannot run web browsers on open source operating systems.

This is all boring and predictable. It is exactly what they did with Android, and exactly the same organizations are pushing passkeys.

Note: If they had good intentions, the operating system would manage any attestation, and not allow websites to query for or require attestation support.


The attestation actually has nothing to do with the browser, only the holder of the passkey's key material. You can satisfy the attestation by having a passkey on your Android device and doing the normal Bluetooth flow with your Firefox browser on your Framework laptop. So this mechanism is totally useless for enacting this plan.

The operating system doesn't manage attestation because that's totally useless for the stated goal of the attestation system. Enterprises don't want their SaaS vendors to accept passkeys from some random employee's BitWarden, instead of the hardware keys they issued the employee. If the OS manages attestation and doesn't send anything to the relying party, then it doesn't solve anybody's problem at all.
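For anyone unfamiliar with the mechanism being debated: in WebAuthn it is the relying party, not the OS, that asks for attestation, via the `attestation` member of the registration options. A rough sketch (field names per the W3C WebAuthn spec; the values are dummies):

```javascript
// Hypothetical registration options an enterprise RP might send.
// "direct" asks the authenticator to return its attestation statement,
// which is how an RP can insist on the hardware keys it issued.
const createOptions = {
  publicKey: {
    challenge: new Uint8Array(32),                 // server-generated nonce
    rp: { name: "Example Corp" },                  // the relying party
    user: {
      id: new Uint8Array(16),
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    // "none" (the default), "indirect", "direct", or "enterprise"
    attestation: "direct",
  },
};
// In a browser this would be passed to navigator.credentials.create(createOptions);
// the attestation then describes the authenticator, not the browser or OS.
```

Note the statement attests to the authenticator holding the key material, which is the point being made above.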


It seems like it will only be a matter of time before consumer sites start requiring a patched OS with an attestation bit set in the key.

Also, as I understand it, sites can whitelist credential hardware.

If not, then the attestation is security theater. I (or an attacker on your machine) can just make a software emulator of a hardware attestation device and use it to protect my choice of OS (and skim your credentials).

If a whitelist exists, then my “hijack your OS” plan works: require the built-in macOS/Windows password managers, or signed Chrome on a signed OS. That’s 90% of the market (and dropping) right now.


As I said, the attestation structurally does NOT attest to your OS or your browser that are displaying the website performing the authentication. It attests to the device that holds the passkey's key material, which is usually not your desktop computer.

The attestation is in fact readable by the FIDO Platform (the browser/OS). It is not encrypted to be readable only by the RP (web site).

It talks about whatever you used to authenticate and the platform can manipulate (or omit) it.
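To illustrate the point about the platform being able to read and manipulate it: the attestation object is plain CBOR, not encrypted to the RP. A hedged sketch (top-level field names per the WebAuthn spec; the values here are dummies, and real data would be CBOR-encoded bytes):

```javascript
// What the authenticator hands back, as seen by the browser/OS.
const attestationObject = {
  fmt: "packed",                                 // attestation statement format
  attStmt: { alg: -7, sig: new Uint8Array(0) },  // authenticator's signature
  authData: new Uint8Array(37),                  // rpIdHash, flags, sign count, ...
};

// Because the platform can read this structure, it can also strip the
// attestation before forwarding, replacing it with the "none" format:
function stripAttestation(obj) {
  return { ...obj, fmt: "none", attStmt: {} };
}
```

That stripping behavior is what browsers do today when the RP requested `attestation: "none"`.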


Yes, but the attestation does not tell the RP anything about the browser. The whole point of the nightmare scenario above was for Google to sneak browser attestation in via passkey attestation. The browser being able to see the attestation doesn’t matter for that.

Does Firefox support the Bluetooth flow on Linux at this time?

That's a matter of implementing an open standard. Google hasn't done anything to prevent open source browsers and OSes from implementing it, and nothing in the spec makes it difficult for Firefox/Linux specifically AFAICT.

I do not want any business with Apple/Google/Microsoft at all, including owning an Android or iPhone for hardware attestation.

You don't need to use anything from Apple/Google/Microsoft. Passkeys are just WebAuthn which is an open standard.

An open standard that has attestation in it which allows sites to block all open implementations. FIDO Alliance spec writers have even threatened that apps like KeepPassXC could be blocked in the future because they allow you to export your keys.

That standard also allows for importing and exporting passkeys. Apple added that to their platforms in iOS/macOS/etc. 26. https://9to5mac.com/2025/06/13/ios-26-passkeys-password-tran...

The export is end to end encrypted, so you do not have ownership of the data, and the provider (Apple in this case) has full control over who you are allowed to export your keys to. (Notice how there are no options to move your keys to a self-hosted service.)

> The standard includes a hardware attestation path.

Yes, and iOS and Android's passkey implementations do not support it, since doing so would be lying about a given credential being hardware-backed when it's actually not (due to being cloud-synced and often recoverable via some process).

Attestation is only for hardware authenticators, either dedicated ones like YubiKeys or non-synchronized Android WebAuthn credentials. (iOS only supports them in MDM contexts now, I believe.)


What are you talking about? Google employees and the corporation itself in particular overwhelmingly donated to the Harris campaign.

https://www.opensecrets.org/orgs/alphabet-inc/recipients?id=...

The corporation gave millions _after_ Trump had already won. If that's your criticism, it does not apply to the people signing.


I think the bigger insanity here is labeling Anthropic a supply chain risk. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.

It means that all companies contracting with the government have to certify that they don't use Anthropic products at all. Not just in the products being offered to the government.

This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.


Microsoft, Azure, AWS, Nvidia and IBM all have deals with other providers for AI. That itself doesn't move the needle.

I think the point is that would be catastrophic for Anthropic.

Who cares about Anthropic? That's the guys who are pushing for regulations to prevent people from using local models. The earlier they are gone the better

Are they? I couldn't find any info about this, and my past perception has been that Anthropic has a stronger moral code than other AI companies, so I would be genuinely interested in where you got this information from.

"First they came for Anthropic, and I said nothing because fuck those guys I guess."

First they came for Anthropic in spite of the fact that Anthropic tried so hard to make them come for local models first.

Going by what Hegseth said, it bans them from relationships or partnering with Anthropic at all. No renting or selling GPUs to them; no allowing software engineers to use Claude Code; no serving Anthropic models from their clouds. Probably have to give up investments; Amazon alone has invested like $10B in Anthropic.

It bans them from using all open source software unless they have signed an agreement with the developer to prohibit use of Claude Code.

What open source software ? Anthropic doesn't make open source software?

All open source software, because the developers might use Claude Code.

Nvidia can also say no; then the DoD won't have any choice but to yield, or to not have AI at all.

It's a government department signalling who's boss.

> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

This is literally the mechanism by which the DoD does what you're suggesting.

Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.

Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.


That doesn’t sound right. Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.

Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.

Let’s put it this way: the DoD is buying pencils from a company. Should that company be prohibited from using Claude?

You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.

The DoD can already sensibly require providers of systems to not incorporate certain companies components. Or restrict them to only using components from a list of vetted suppliers.

Without prohibiting entire companies from uses unrelated to what the DoD purchases, or that aren't a component in something it buys.


There seems to be a massive misunderstanding here - I'm not sure on whose side. In my understanding, if the DoD orders an autonomous drone, it would probably write in the ITT that the drone needs to be capable of doing autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria, and cannot reasonably join the bid?

What the declaration of supply chain risk does, though, is that nobody at Lockheed can use Anthropic in any way without risking being excluded from any bids by the DoD. This effectively loses Anthropic half or more of the businesses in the US as customers.

And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?


> Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?

Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?

And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.

The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically classified computer systems/services contracts haven't contained field of use restrictions on classified computer systems.

If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?


Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.

That's not the same thing as delivering a weapon that has a certain capability but then put policy restrictions on its use, which is what your comparison suggests.

The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.


Let me put it this way–if Boeing is developing a new missile, and they say to the Pentagon–"this missile can't be used yet, it isn't safe"–and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now"–how does Boeing respond?

I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.

Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.


Why are we gauging our ethical barometer on the actions of existing companies and DoD contractors? the military industrial apparatus has been insane for far too long, as Eisenhower warned of.

When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?

The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.


There are two separate aspects to this case.

* autonomous weapons systems

* private defense contractor leverages control over products it has already sold to set military doctrine.

The second one is at least as important as the first one, because handing over our defense capabilities to a private entity which is accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved but the solution cannot be to normalize the second problem.


But parent is right, both Lockheed and the pencil maker will have to cease working with Anthropic over this.

> Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.

Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen a) because of the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.

Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.


MIGHT be overreach to call this a supply chain risk?!? That is absolutely ludicrous.

To quote one of the greatest movies of all time: That’s just, like, your opinion, man.

You're making it sound like this is commonly practiced and a standard procedure for the DoD, yet according to Anthropic,

>Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.

Some very brief googling also confirmed this for me too.

>Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.

This statement misses the point. The political punishment to disallow all US agencies and gov contractors from using Anthropic for _any _ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it "DoD vendor exclusion list" or whatever other placating phrase or term doesn't change the action.


>an unprecedented action

it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma. Just because silicon valley gets away with bullying the consumer market with mandatory automatic updates and constantly-morphing EULAs doesn't mean they're entitled to take that attitude with them when they try to join the military industrial complex. Actually they shouldn't even be entitled to take that attitude to the consumer market but sadly that battle was lost a long time ago.

>for _any _ purpose

they're allowed to use it for any purpose not related to a government contract.


> it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma

That is a deeply deceptive description of what happened. Anthropic was clear from the beginning of the contract the limitations of Claude; the military reneged; and beyond cancelling the contract with Anthropic (fair enough), they are retaliating in an attempt to destroy its businesses, by threatening any other company that does business with Anthropic.


>Anthropic was clear from the beginning of the contract the limitations of Claude

No, that's not what they said.

"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now".


It’s not clear to me that the AI itself will refuse. You could build a system where AI is asked if an image matches a pattern. The true/false is fed to a different system to fire a missile. Building such a system would violate the contract, but doesn’t prevent such a thing from being built if you don’t mind breaking a contract.

I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state that models must be available for AI murder drones or whatever. Anthropic then can't bid and there's no need to designate them a supply chain risk.

> Anthropic then can't bid

Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid, AND give them unrestricted access.


And yet Anthropic is free to choose who to do business with, including the government. There are countless companies who have exclusions for certain applications, but that does not make them a supply chain risk.

> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.

But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.


Supply chain risk is not meant for this. The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.

It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."

Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.


> The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.

You might be completely right about their real motivations, but try to steelman the other side.

What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.

Anthropic engaged in a process where they take a model that is perfectly capable of writing that object tracking code, and they try to install a sense of restraint on it through RLHF. Suppose Opus 6.7 comes out and it has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.

Is this a bit far fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say, Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.


I would hope the DoD would test things before using them in the theater of war.

But there will always be deficiencies in testing, and regardless, the point is that Anthropic is intentionally introducing behavior into their models which increases the chance of a deficiency being introduced specifically as it pertains to defense.

The DoD has a right to avoid such models, and to demand that their subcontractors do as well.

It’s like saying “well I’d hope Boeing would test the airplane before flying it” in response to learning that Boeing’s engineering team intentionally weakened the wing spar because they think planes shouldn’t fly too fast. Yeah, testing might catch the specific failure mode. But the fact that your vendor is deliberately working against your requirements is a supply chain problem regardless of how good your test coverage is.


The rule in question is exactly meant for “this”, where “this” equals ”a complete ban on use of the product in any part of the government supply chain”. That’s why it has the name that it has. The rule itself has not been misconstrued.

You’re really trying to complain that the use of the rule is inappropriate here, which may be true, but is far more a matter of opinion than anything else.


You keep trying to say this all over these comments but this isn’t how the law works, at all.

I fully understand that they are using it to ban things from the supply chain. The law, however, is not “first find the effect you want, then find a law that results in that, then accuse them of that.”

You can’t say someone murdered someone just because you want to put them in jail. You can’t use a law for banning supply chain risks just because you want to ban them from the supply chain.

This isn’t idle opinion. Read the law.


> but this isn’t how the law works, at all.

Not sure what you think “the law” is, but no, this kind of thing happens all the time. Both political teams do it, regularly. Biden, Obama, Bush, Clinton…all have routinely found an existing law or rule that allowed them to do what they want to do without legislation.

> The law, however, is not “first find the effect you want, then find a law that results in that, then accuse them of that.”

In this case, no, there’s no such restriction. The administration has pretty broad discretion. And again, this happens all the time.

Sorry, it sucks, but if you don’t like it, encourage politicians to stop delegating such broad authority to the executive branch.


It doesn't harm national security, but only so long as it's not in the supply-chain. They can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said their products will be able to refuse to carry out certain orders by their own autonomous judgement.

The government can refuse to buy a fighter jet that runs software they don't want.

Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses Claude to write emails?


That's not what Anthropic said. They said their products won't fire autonomously, not that they will refuse when given an order by a human.

"Hey Claude I need you to use this predator drone to go blow up everybody who looks like a terrorist in the name of Democracy."

I’m not sure if you deliberately choose to not understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit, it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all. A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation because a company refuses to throw its values overboard when the government says so.

I think those are two different orders: one with a gag order and one without.

In cases without gag orders, Google has pushed back or requested users fight the subpoena.

In this instance, Google got a gag order while Meta doesn't appear to have gotten one. I'm not sure how gag orders like this can be legal. I'm sure there's like Nat Sec defenses but it sure seems dangerous to say the target cannot be notified of such requests.


I think it's because there's no single correct answer here that the model is allowed to be fuzzier. You still mix in real training data, and maybe more physics-based simulation of course, but it does seem acceptable to synthesize extreme-tail evaluations, since there isn't really a "better" way by definition and you can evaluate the end driving behavior after training.

You can also probably still use it for some kinds of evaluation as well since you can detect if two point clouds intersect presumably.
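As a hedged sketch of what that intersection check might look like (function names and shapes are my own, not from any particular autonomy stack): a cheap first pass is testing whether the two clouds' axis-aligned bounding boxes overlap, with finer tests only run when they do.

```javascript
// Compute the axis-aligned bounding box of a point cloud,
// where each point is a [x, y, z] array.
function boundingBox(points) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const p of points) {
    for (let i = 0; i < 3; i++) {
      if (p[i] < min[i]) min[i] = p[i];
      if (p[i] > max[i]) max[i] = p[i];
    }
  }
  return { min, max };
}

// Two boxes overlap iff their intervals overlap on every axis;
// a true result flags the pair for a finer-grained check.
function cloudsMayIntersect(a, b) {
  const A = boundingBox(a);
  const B = boundingBox(b);
  return [0, 1, 2].every(i => A.min[i] <= B.max[i] && B.min[i] <= A.max[i]);
}
```

This is only a conservative proxy (boxes can overlap while the clouds don't), but it's the kind of geometric predicate that stays checkable even on synthesized scenes.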

In much the same way, LLMs are not perfect at translation but are widely used anyway for NMT.


Would it actually be a good idea to operate a car near an active tornado?


It’s autonomous!


Kinda yeah, they tend to always travel northeast


The tornado?


ML models don't have fight or flight, so we'll have to show them a tornado and teach them to run away.


Maybe this is why execs love LLMs a little too much? It's probably not unconnected at the very least.


Are LLMs slowly evolving to be more appealing to narcissistic personalities?

Given the dynamics here that could be the main selection pressure on them.


I don't think the environment being cool is a factor in current data center designs, is it? Otherwise, the northern US or Alaska would be candidates. Instead, a lot of the data center boom is in states like Texas or the south.

I think some interviewer did actually ask Trump the question you posed, and he said something to the effect of "ownership is important" for him _personally_, not necessarily for the _US_, which is a ridiculous thing to hear from a leader of a country.


Iceland based data centers are able to cut their energy usage for cooling by 24-31% compared to US/UK equivalent due to the climate [0].

[0]: https://eandt.theiet.org/2022/12/12/iceland-coolest-location...


> I don't think the environment being cool is a factor in current data center designs is it? Otherwise, the northern US or Alaska would be candidates. Instead, a lot of the data center boom is in states like Texas or the south.

It is increasingly becoming so, and some designs work well. I read a post about the Internet Archive's smart use of server heat a couple of days ago, but I can't find it now. And indeed, good point: Alaska would be great for that too.

And the US is kinda an exception, the rest of the world is watching emissions but the US is trying to screw the world up for everyone else. Including themselves but Trump followers seem to view all the disasters as an 'act of god'. I remember those poor school kids in the flooding in texas last year and there being more 'thoughts and prayers' than actual help or prevention.

I know Ireland is popular for datacenters in part because of the climate there (in another big part all the tax breaks but ok).

And yes you can cool them with renewable energy. Most datacenters are. But it also means that renewable energy can't be used for something else.


>I think some interviewer with Trump did actually ask him the question you posed and he said something to the effect of "ownership is important" for him _personally_ not necessarily for the _US_

Does he actually know the difference between "mine" and "the US'" though? I was under the assumption that since the US is his, anything important for him is also important for the US, and vice versa.

