Is there a leading American AI research organization - big tech or academia - that isn't "full of Chinese nationals"? If the DoD wants an all-American SoTA model, it may have to wait a while.
What Anthropic was complaining about is training on mass-elicited chat logs. That is very much a ToS violation (you aren't allowed to exploit the service for the purpose of building a competitor), so the complaint is well-founded, but (1) it's not "distillation" properly understood: because you have no access to the actual weights, it can only feasibly extract the same kind of narrow knowledge you'd read out of chat logs, perhaps including primitive "let's think step by step" output (which is not true fine-tuned reasoning tokens); and (2) it's something Western AI firms are very much believed to do to one another, and to Chinese models, all the time anyway. Hence the brouhaha about Western models claiming to be DeepSeek when they answer in Chinese.
So they're paying expensive input tokens to extract at best a tiny amount of information ("judgment") per request? That's even less like "distillation" than the other claim of them trying to figure out reasoning by asking the model to think step by step.
LLM-as-a-judge is quite an effective method for RLing a model, similar to RLHF but more objective and scalable. But yes, Anthropic is making it sound more serious than it is. Plus, DeepSeek only did it for 125k requests, significantly fewer than the other labs, but Anthropic still listed them first to create FUD.
Task time horizons are improving exponentially, with doubling times around 4 months per METR. At what timescale would you accept that they "can be strategic"? There's little reason to think they won't be at multi-week or multi-month time horizons very soon. Do you need to be strategic to complete multi-month tasks?
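For concreteness, here is a back-of-envelope extrapolation of that doubling trend. The starting horizon (~4 hours) and the definition of "multi-month" (~two 160-hour working months) are illustrative assumptions, not figures from the thread; only the 4-month doubling time comes from the METR claim above.

```python
import math

# Extrapolate an exponentially doubling task time horizon.
# Assumptions (mine, not the thread's): current horizon ~4 hours,
# "multi-month" ~= two 160-hour working months.
current_horizon_h = 4.0
doubling_time_months = 4.0       # METR's reported doubling time
target_horizon_h = 2 * 160       # ~two working months of task length

doublings_needed = math.log2(target_horizon_h / current_horizon_h)
months_until_target = doublings_needed * doubling_time_months
print(f"{doublings_needed:.1f} doublings, ~{months_until_target:.0f} months")
```

Under those assumptions you get roughly six to seven doublings, i.e. on the order of two years; shrink the assumed starting horizon and the date moves out only logarithmically.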
>Can an LLM give you an upfront estimate that a task will take multiple months?
>Can it decide intelligently what it would have to change if you said "do what you can to have it ready in half the time?"
Do you think ChatGPT 5.2 Pro can't estimate how long a task might take? Do you think that estimate would necessarily be worse than the notoriously poor estimates human engineers produce?
But you can still answer my question. When an LLM can complete a task that takes a person N months or years, is it capable of being strategic?
Well, to be fair, NASA isn't nearly as good as it once was. The quality of engineers during the Apollo era was far higher, closer to what you find at SpaceX today.
What is that based on? NASA's recent accomplishments are far beyond anyone. Off the top of my head: The many Mars missions, JWST, Europa Clipper (still in progress), etc. SpaceX hasn't left Earth orbit, afaik.
That is looking only at (mostly orbital) launch systems, such a minor part of NASA's R&D and missions that the Obama administration decided to contract it out.
NASA deliberately stops developing and building technology once it has become mature enough for private industry to take on, so NASA can focus on the past-the-bleeding-edge stuff.
Reality? Outside of JPL, the talent level at NASA is frankly very poor. Do you really want to claim that the current version of NASA could pull off the Apollo program today?
All of which are peanuts compared to Apollo. Not to mention the insane cost overruns and timeline failures on projects like JWST or SLS. Many of those successful projects came out of JPL, where, as I mentioned, the talent level is much higher.
>Without circular investments and valuations what would Open AI be worth? 100B? 300B? Entirely on revenue alone it seems like 20B. Current valuation appears to be two orders of magnitude off.
They just passed $20B in revenue; you can't really expect a company with this much hype and traction to trade at a 1x multiple. That's not to say a 35x multiple makes sense either.
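To make the multiples concrete: the revenue and multiple figures below come from the comment above, while the implied valuation is just their product, not a reported number.

```python
# Sanity-check the revenue multiples discussed in this thread.
revenue_b = 20          # ~$20B annual revenue (from the comment)
multiple = 35           # the "35x" revenue multiple (from the comment)
implied_valuation_b = revenue_b * multiple  # what 35x would imply
print(f"${implied_valuation_b}B implied valuation at {multiple}x")
```

So a 35x multiple on $20B of revenue implies a valuation around $700B, which is the gap the earlier comment is objecting to.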
All I see everywhere is "OpenAI generated $13 billion in revenue in 2025" and it just cost them $8 billion. $5B loss in 2025 and projections of losing $14B this year.
I think the comment was a roundabout way of saying this is a clear market failure. There are more societally important things these people could be doing instead of shaving another ms off a transaction or finding minuscule option pricing inefficiencies. That the market is not correctly remunerating those options is the failure.
> For example, the tariff tantrums caused by trump proposing 100%+ china tariffs where he crashed the markets last spring, leading to a moderation in policy.
"Akshually traders are good bcuz they crash the market when the president does insane things" is not the own you think it is.
> this undervalues how financial engineering allows more ideas and companies to be funded
I think the comment is about the marginal utility of additional workers at Jane St over, say, DE Shaw Research. The caliber and education of roughly the same kind of person might be applied to understanding drug mechanisms, or to shaving milliseconds off trades.
Is the marginal benefit to the world greater if someone is advancing financial engineering? I don't think it's obvious that our increased complexity is, itself, yielding further increases in "allowing more ideas and companies to be funded," except in the sense that already-wealthy people gain more discretionary income which they may decide to spend on their pet projects. Futures have existed for far longer than modern high-speed derivatives markets; are we helping farmers more when we allow futures to be traded more quickly?
But I disagree that the limit is funding; it's simply a lack of concerted interest. We accept that we should spend tax money rewarding certain financial activities, and we create a system that disproportionately rewards people who facilitate those activities. But we might restructure things so people are incentivized to do research instead of financial engineering.
I think the fundamental idea is that things of value need to be extracted or manufactured at some point and we're not set up to reward people studying new extractive tools or new manufacturing processes when those people could instead work on finance products.
I think these are totally different things. HFT firms and hedge funds are not "allowing more ideas to be funded". Finance in general can indeed be good, but I think it's much harder to argue for the net benefit of firms like Jane Street or Citadel.
This is just a weird Trump talking point. This situation is unprecedented on many levels. The Pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic, under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD's demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US military policy.