Hacker News | new | past | comments | ask | show | jobs | submit | vonneumannstan's comments

Lol, most Kuwaitis, including the royal family, are Sunni and consider Iranian Shias to be heretics. So no love lost there at all.

All fun and games until an intern deletes the original sheet or fat-fingers critical information.

A bit hard for him to swing if he wants to position xAI as a key defense contractor for AI while his company is full of Chinese nationals...

Is there a leading American AI research organization - big tech or academia - that isn't "full of Chinese nationals"? If the DoD wants an all-American SoTA model, it may have to wait a while.

Maybe true, but you should probably expect at least some information to leak directly to the CCP if you accept that.

AI research orgs voluntarily do their own "leaking" via the publication of research papers, and involuntarily via employee/post-doc churn.

The guilty-until-proven-innocent mindset is what kickstarted China's nuclear program - and led to the internment of people with Japanese heritage.


Were they kneecapped by Anthropic blocking their distillation attempts?

What Anthropic was complaining about is training on mass-elicited chat logs. It is very much a ToS violation (you aren't allowed to exploit the service for the purpose of building a competitor), so the complaint is well-founded, but (1) it's not "distillation" properly understood: since you have no access to the actual weights, it can only feasibly extract the same kind of narrow knowledge you'd read out of chat logs, perhaps including primitive "let's think step by step" output (which is not true fine-tuned reasoning traces); and (2) it's something Western AI firms are widely believed to do to one another, and to Chinese models, all the time anyway. Hence the brouhaha about Western models claiming to be DeepSeek when they answer in Chinese.

The "distillation attacks" are mostly using Claude as LLM-as-a-judge. They are not training on the reasoning chains in a SFT fashion.

So they're paying expensive input tokens to extract at best a tiny amount of information ("judgment") per request? That's even less like "distillation" than the other claim of them trying to figure out reasoning by asking the model to think step by step.

LLM-as-a-judge is quite an effective method for RLing a model, similar to RLHF but more objective and scalable. But yes, Anthropic is making it out to be more serious than it is. Plus, DeepSeek only did it for 125k requests, significantly fewer than the other labs, but Anthropic still listed them first to create FUD.
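
A minimal sketch of what judge-based RL looks like in practice, assuming a group-relative scheme: judge scores for a batch of sampled responses are normalized into advantages. The `judge_score` stub is hypothetical; a real setup would call a judge model (e.g. Claude) to rate each response.

```python
# Toy sketch: turn LLM-as-a-judge scores into normalized advantages,
# so the policy is pushed toward responses the judge rates above the
# group average. `judge_score` is a hypothetical stand-in for a real
# judge-model call.
from statistics import mean, pstdev

def judge_score(prompt: str, response: str) -> float:
    # Stand-in heuristic; a real judge would be an LLM rating 0-10.
    return float(len(response.split()))

def advantages(prompt: str, responses: list[str]) -> list[float]:
    scores = [judge_score(prompt, r) for r in responses]
    mu, sigma = mean(scores), pstdev(scores) or 1.0
    # Center and scale within the group of samples for one prompt.
    return [(s - mu) / sigma for s in scores]
```

These advantages would then weight a policy-gradient update; the point is only that each request yields a scalar signal, not reasoning chains to imitate.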

Unless you know something we don't, Alibaba hasn't been accused of distilling or stealing any Anthropic assets.

Task time horizons are improving exponentially, with doubling times around 4 months per METR. At what timescale would you accept that they "can be strategic"? There's little reason to think they won't be at multi-week or multi-month time horizons very soon. Do you need to be strategic to complete multi-month tasks?
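
For scale, here is what a 4-month doubling time implies. The 1-hour starting horizon is an illustrative assumption, not METR's measured figure:

```python
# Exponential growth of task time horizons with a fixed doubling time.
# start_hours=1.0 is an illustrative assumption, not a measured value.
def horizon_hours(months: float, start_hours: float = 1.0,
                  doubling_months: float = 4.0) -> float:
    return start_hours * 2 ** (months / doubling_months)
```

Under those assumptions, two years of progress is 2^6 = 64x, i.e. a 1-hour horizon becomes roughly a week and a half of working time.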

Can an LLM give you an upfront estimate that a task will take multiple months?

Can it decide intelligently what it would have to change if you said, "do what you can to have it ready in half the time"?


>Can an LLM give you an upfront estimate that a task will take multiple months?

>Can it decide intelligently what it would have to change if you said "do what you can to have it ready in half the time?"

Do you think ChatGPT 5.2 Pro can't estimate how long a task might take? Do you think that estimate would necessarily be worse than the notoriously poor estimates coming from human engineers?

But you can still answer my question. When an LLM can complete a task that takes a person N months or years, is it capable of being strategic?


Multiple people have already answered your question in this thread. An LLM can't be strategic because that's not a capability of the technology itself.

Well, to be fair, NASA isn't nearly as good as it once was. The quality of engineer during the Apollo era was far better and more like what can be found at SpaceX.

What is that based on? NASA's recent accomplishments are far beyond anyone else's. Off the top of my head: the many Mars missions, JWST, Europa Clipper (still in progress), etc. SpaceX hasn't left Earth orbit, afaik.

If you had your pick of launch systems to work on, I don't believe you would pick any of NASA's platforms since the shuttle.

Their exploration robotics are interesting, something I would be proud to work on, but a pretty different niche.

So NASA is not drawing from the best people anymore.


That is only looking at (mostly orbital) launch systems, such a minor part of NASA's R&D and missions that the Obama administration decided to contract it out.

NASA no longer needs to develop or build lots of technology that has become mature enough for private industry to take on, which lets NASA focus on the stuff past the bleeding edge.


DART is an example of both an incredible NASA accomplishment and a SpaceX launch that left Earth orbit.

SpaceX launched a Tesla Roadster to Mars orbit.

>What is that based on?

Reality? Outside of JPL, the talent level at NASA is frankly very poor. You really want to claim that the current version of NASA could pull off the Apollo program today?


Again, what is that based on? NASA has one success after another. JWST, helicopters flying on Mars, Europa Clipper (ongoing), etc. etc. etc.

All of which are peanuts compared to Apollo. Not to mention the insane cost overruns and timeline failures on projects like JWST or SLS. Many of those successful projects came out of JPL, where, as I mentioned, the talent level is much higher.

$30B in sales is worth more than $30B in stock appreciation...

>Without circular investments and valuations what would Open AI be worth? 100B? 300B? Entirely on revenue alone it seems like 20B. Current valuation appears to be two orders of magnitude off.

They just passed $20B in revenue; you can't really expect a company with this much hype and traction to have a 1x multiple... that's not to say a 35x multiple makes sense either.


> $20B in revenue

All I see everywhere is "OpenAI generated $13 billion in revenue in 2025" and it just cost them $8 billion. $5B loss in 2025 and projections of losing $14B this year.


I think the comment was a roundabout way of saying this is a clear market failure. There are more societally important things these people could be doing instead of shaving another ms off a transaction or finding minuscule option pricing inefficiencies. That the market is not correctly remunerating those options is the failure.

> For example, the tariff tantrums caused by trump proposing 100%+ china tariffs where he crashed the markets last spring, leading to a moderation in policy.

"Akshually traders are good bcuz they crash the market when the president does insane things" is not the own you think it is.


this undervalues how financial engineering allows more ideas and companies to be funded

compound interest is a rare exponential force, and it is available to most citizens of a developed country through the stock market
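
To put a number on that exponential force (the 7% annual return is a common long-run stock-market assumption, not a prediction):

```python
# Compound growth of an investment, plus the rule-of-72 estimate of
# its doubling time. The 7% rate used below is an assumption.
def compound(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

def doubling_years(rate: float) -> float:
    # Rule-of-72 approximation: years to double ≈ 72 / (rate in percent).
    return 72 / (rate * 100)
```

At 7%, $1,000 roughly doubles each decade, which is why broad market access matters for ordinary savers.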

financial futures remain important for farmers to have predictable pricing, and increase crop yields

science is limited by funding at least as much as it is limited by ideas or intelligence

I understand why finance is a popular bogeyman, but the world is rarely black and white


> this undervalues how financial engineering allows more ideas and companies to be funded

I think the comment is about the marginal utility of additional workers at Jane St over, perhaps, DE Shaw Research. The caliber and education of roughly the same kind of person might be applied to understanding drug mechanisms, or shaving off trading milliseconds.

Is the marginal benefit to the world greater if someone is advancing financial engineering? I don't think it's obvious that our increased complexity is, itself, yielding further increases in 'allowing more ideas and companies to be funded' except in the sense where already-wealthy people gain more discretionary income which they may decide to spend on their pet projects. Futures have existed for much longer than derivative markets; are we helping farmers more when we allow futures to be traded more quickly?

But I disagree that the limit is funding—it's simply a lack of concerted interest. We accept that we should spend tax money on rewarding certain financial activities, and we create a system that disproportionately rewards people who facilitate these activities. But we might restructure things so people are incentivized to do research instead of financial engineering.

I think the fundamental idea is that things of value need to be extracted or manufactured at some point and we're not set up to reward people studying new extractive tools or new manufacturing processes when those people could instead work on finance products.


I think these are totally different things. HFT firms and hedge funds are not "allowing more ideas to be funded". Finance in general can indeed be good, but I think it's much harder to argue for the net benefit of firms like Jane Street or Citadel.

This is just a weird Trump talking point. This situation is unprecedented on many levels. The Pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic, under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD's demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US military policy.
