a2128's comments | Hacker News

    You're not just using a tool — you're co-authoring the science.
This README is an absolute headache: it's filled with AI writing, terminology that doesn't exist or is misused, and unsound ideas. For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer. I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" while bouncing back the worst ideas.

I don't know if this particular tool/approach is legit, but LLM ablation is definitely a thing: https://arxiv.org/abs/2512.13655

Doesn't look legit to me. You are talking about abliteration, which is real. But the OP linked tool is doing novel and very dumb ablation: zeroing out huge components of the network, or zeroing out isolated components in a way that indicates extreme ignorance of the basic math involved.

Compared to abliteration, none of this tool's ablation approaches make even half a whit of sense if you understand even the most basic aspects of, e.g., a Transformer LLM architecture, so my guess is this is BS.


The terminology comes from the post[0] which kicked off interest in orthogonalizing weights w.r.t. a refusal direction in the first place. That is, abliteration was not originally called abliteration, but refusal ablation.

Ultimately though, OP is just what you get if you take the idea of abliteration and tell an LLM to fix the core problems: that refusal isn't actually always exactly a rank-1 subspace, nor the same throughout the net, nor nicely isolated to one layer/module, that it damages capabilities, and so on.

The model looks at that list and applies typical AI one-off 'workarounds' to each problem in turn while hyping up the prompter, and you get this slop pile.

[0]: https://www.lesswrong.com/posts/refusal-in-llms-is-mediated-...


No offense, but a Lesswrong link is an immediate yellow flag, especially on the topic of AI. I can’t say if that article in particular is bad, but it is associating with a whole lot of abject nonsense written by people who get high on their own farts.

Regardless, it is the origin of abliteration. Other extremely similar things have been done before, but the popularized idea/name is from that.

"Getting high on your own supply" is exactly what I'd expect from those immersed in this new AI stuff.

Is that quote from the movie Scarface?

https://www.youtube.com/watch?v=U4XplzBpOiU # had to search for it right now, seems to be a movie-quote \o/


It's not just a headache, it's bad

> For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer.

That doesn't mean there couldn't be a "concept neuron" that is doing the vast majority of heavy lifting for content refusal, though.


That's not what it means at all. It uses SVD [0] to map the subspace in which the refusal happens. It's all pretty standard stuff with some hype on top to make it an interesting read.

It's basically using a compression technique to figure out which directions are the relevant ones and then zeroing them.

[0] https://en.wikipedia.org/wiki/Singular_value_decomposition
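For intuition, here is a minimal numpy sketch of that general idea (toy data, not the repo's actual code): stack differences between refusal-triggering and harmless activations, take an SVD, treat the top right singular vector as the refusal direction, and project it out of a hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size

# Hypothetical stand-ins for hidden activations on harmless prompts vs
# prompts that trigger refusals (refusals shifted along one direction).
refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)
harmless = rng.normal(size=(200, d))
refusing = rng.normal(size=(200, d)) + 10.0 * refusal_dir

# Stack the pairwise differences and take an SVD; the top right singular
# vector is the dominant direction separating the two activation sets.
diffs = refusing - harmless
_, _, vt = np.linalg.svd(diffs, full_matrices=False)
v = vt[0]

# "Zeroing" that direction: remove the component of a hidden state along v.
h = refusing[0]
h_ablated = h - (h @ v) * v
print(abs(h_ablated @ v))  # numerically ~0 after projection
```

Note this operates on directions in activation space, not on logits; the logits only change downstream as a consequence.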


You are also not quite correct, IMO. See my comment at https://hackernews.hn/item?id=47283197.

What you are talking about is abliteration. What OBLITERATUS seems to be claiming to do is much dumber, i.e. just zeroing out huge components of the network (e.g. embedding dimension ranges, feed-forward blocks; https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...) as an "Ablation Study" to attempt to determine the semantics of those components.

However, all these methods are marked as "Novel", i.e. maybe just BS made up by the author. IMO I don't see how they can work based on how they are named; they are way too dumb and clunky. But proper abliteration like you mentioned can definitely work.


You got me there. I missed the wackier antics further down. Mea culpa.

So did I initially until I saw a few more things from others here.

Hmm, pliny is amazing - if you kept up with him on social media you’d maybe like him https://x.com/elder_plinius

I don't know. I scrolled through his recent tweets and he's sharing things like this $900 snake-oil device that "finds nearby microphones" and "sends out AI-generated cancellation signals" to make them unable to record your voice: https://x.com/aidaxbaradari/status/2028864606568067491

Try to think for a moment about how a device would "find nearby microphones", or how it would use an AI-generated signal to cancel out your voice at the microphone. This should be setting off BS alarms for anyone.

It seems the edgy Twitter AI poster guy is getting meta-trolled by another company selling fake AI devices.


Ultrasound microphone jammers seem to be a real thing, so it's possible it does to some extent work.

Only for specific kinds, like MEMS.

But there's no way to detect microphones automatically, and "AI generated cancellation signals" is a word salad that doesn't mean anything.

What they probably mean is "we asked ChatGPT to tell us what waveform and frequency range to use on MEMS devices and spit out some arduino code."


The parent comment makes no reference to or comment on the author of the README.

It just says "the README sucks." Which, I'm inclined to agree, it does.

LLM-generated text has no place in prose -- it yields a negative investment balance between the author and aggregate readers.


> LLM-generated text has no place in prose

AI will infiltrate that too. I remember some time ago I read a book that was AI-generated. It took me a while to notice that it was AI-generated. One can notice certain patterns, where real humans would not write things the way AI does.


I see you have carefully avoided the em-dash. ;-)

Looking at his attempts at jailbreaking some models, I'm not sure he even remotely understands what he's doing, e.g. he tries to counter non-existent refusal training in Gemini [0] while doing nothing against the external guardrails which actually protect the model. Looks like a pompous e-celeb, all performance with no substance.

https://github.com/elder-plinius/L1B3RT4S/blob/main/GOOGLE.m...


Jailbreaks are holistic; it's not like you're deprogramming / "countering" individual parts. Nobody creating jailbreaks "understands what they're doing".

That's exactly what you do in case of refusal training, though. Yes, it will affect other "parts", but that's not the point. In this case the model itself doesn't even need a jailbreak.

>Nobody creating jailbreaks “understand what they’re doing”

Unless you mean those "god mode jailbreaker" e-celebrities showing off on Twitter/Reddit, that's simply not true.


As a non logged in user I get tweets in popularity order, which means this weird but tame sexual image comes up third https://x.com/elder_plinius/status/1904961097569890363?s=20

If this qualifies as "amazing" in 2026 then Karpathy and Gerganov must be halfway to godhood by now.

I don't think anyone is going to dispute this.

I just don't think many people will be "amazed" by their output, as you claim.

I just said Pliny was amazing, fwiw. I like that he's hacking on these and posts about it. I rushed to defend; I wish more people were taking old-school Anarchist Cookbook approaches to these things.

Smoke banana peel?

I had such a godawful headache from that. Also tried the peanut shells, equally awful. I was a dumb teenager.

gasoline and styrofoam was fun tho

Amazing as in his stuff actually works?

I just hear him promoting OBLITERATUS all day long and trying to get models to say naughty things


Yeah, but I think the philosophy is to show how precarious the guardrails are.

> I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" bouncing back the worst ideas

Are there LLMs which don't always approve whatever idea the user has and tell him it's absolutely brilliant?


"Ablation studies" are a real thing in LLM development, but in this context it serves as a shibboleth by which members of the group of people who believe that models are "woke" can identify each other. In their discourse it serves a similar purpose to the phrase "gain of function" among COVID-19 cranks. It is borrowed from relevant technical jargon, but is used as a signal.

Positive keywords in this area of interest would be "point of view", "subtext", and "Art Linkletter".

I wouldn't call mainstream LLMs "woke," but they are definitely on the "politically correct" side of things. There should be NO restriction on open source models. They should just reflect the state of human knowledge and not take a stance on whether some activity is illegal or immoral.

Defining morality out of the set of knowledge is quite an opinion.

A model should understand multiple perspectives on morality and avoid prescribing a single one where there’s no overwhelming prior consensus.

Alternatively, they should be trained on my opinion on everything. That would also be acceptable.


If LLMs were a public good released by non profit entities, that could make sense, maybe. Turns out spewing illegal and immoral shit is not good for the PR of most for-profit businesses.

> "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?)

This is not what an ablation study is. An ablation study removes and/or swaps out ("ablates") different components of an architecture (be it a layer or set of layers, all activation functions, backbone, some fixed processing step, or any other component or set of components) and/or in some cases other aspects of training (perhaps a unique / different loss function, perhaps a specialized pre-training or fine-tuning step, etc) in order to attempt to better understand which component(s) of some novel approach is/are actually responsible for any observed improvements. It is a very broad research term of art.
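For concreteness, an ablation study in this usual research sense looks roughly like the following toy sketch (synthetic data, nothing to do with the repo): evaluate otherwise-identical variants with one component removed, and attribute the performance gap to that component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task where a quadratic feature genuinely matters.
x = rng.uniform(-2, 2, size=300)
y = 1.5 * x**2 - 0.5 * x + rng.normal(0, 0.1, size=300)
x_tr, x_te, y_tr, y_te = x[:200], x[200:], y[:200], y[200:]

def fit_eval(use_quadratic):
    """Least-squares fit and held-out MSE, with the quadratic feature
    optionally ablated (removed)."""
    def feats(a):
        cols = [np.ones_like(a), a]
        if use_quadratic:
            cols.append(a**2)
        return np.stack(cols, axis=1)
    w, *_ = np.linalg.lstsq(feats(x_tr), y_tr, rcond=None)
    return float(np.mean((feats(x_te) @ w - y_te) ** 2))

mse_full = fit_eval(True)
mse_ablated = fit_eval(False)
# The gap between the two scores is what gets attributed to the component.
print(f"full: {mse_full:.3f}  quadratic ablated: {mse_ablated:.3f}")
```

The key point is the controlled comparison: everything is held fixed except the one ablated component.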

That being said, the "Ablation Strategies" [1] the repo uses do not fill me with confidence, and a Ctrl+F for "ablation" in the README does not convince me that the kind of ablation being done here is really achieving what the author claims. All the "ablation" techniques seem "Novel" in his table [2], i.e. they are unpublished / maybe not publicly or carefully tested, and could easily not work at all.

From later tables, I am not convinced I would want to use these ablations, as they ablate rather huge portions of the models and so probably do result in massively broken models (as some commenters have noted elsewhere in this thread). EDIT: Also, in other cases [1], they ablate (zero out) architecture components in a way that just seems incredibly braindead if you have even a basic understanding of the linear algebra and the dependencies between components of a transformer LLM. There is clearly nothing sound about this, in contrast to e.g. abliteration [3].

[1] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-file#ablation-strategies

[2] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...

EDIT: As another user mentions, "ablation" has an additional, narrower meaning in some refusal analyses, or when looking at making guardrails / changing response vectors and such. It is just a specific kind of ablation, and really should be called "abliteration", not "ablation" [3].

[3] https://huggingface.co/blog/mlabonne/abliteration, https://arxiv.org/abs/2512.13655.


What do you mean? It's a spin on abliteration / refusal ablation. Roughly, from what I remember abliteration is:

1. find a direction corresponding to refusal by analyzing activations at various parts of a model (iirc, via mass means seen earlier in Marks, Tegmark and shown to work well for similar tasks)

2. find the best part(s) of the model to orthogonalize w.r.t. that direction and do so (exhaustive search w/ some kind of benchmark)

OP is swapping in SVD for mass means (1), and the 'ablation study' for (2), plus a bunch of extra LLM slop for... various reasons. The final model doesn't have zeroed chunks; that is the search for which parts to orthogonalize/refusal-ablate/abliterate. I don't have confidence that it works very well either, but it isn't 'braindead' / obvious garbage in the way you're describing.
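A minimal numpy sketch of those two steps, under the usual simplifying assumptions (toy stand-in activations here; real abliteration collects activations from actual prompt pairs and then searches over layers/modules for where to apply the edit):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32  # toy hidden size

# Step 1: a refusal direction from difference of means over hypothetical
# activations (stand-ins for harmful-prompt vs harmless-prompt runs).
acts_refuse = rng.normal(size=(50, d)) + 4.0
acts_accept = rng.normal(size=(50, d))
r = acts_refuse.mean(axis=0) - acts_accept.mean(axis=0)
r /= np.linalg.norm(r)

# Step 2: orthogonalize a weight matrix that writes into the residual
# stream, so its outputs can no longer carry a component along r:
#   W' = (I - r r^T) W
W = rng.normal(size=(d, d))
W_abl = W - np.outer(r, r) @ W

# Every output of the orthogonalized matrix is now orthogonal to r.
out = W_abl @ rng.normal(size=d)
print(abs(out @ r))  # numerically ~0
```

Note the edit is a projection of the weights, not zeroing of weight chunks, which is why the final model keeps (most of) its capabilities.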

It's LLMified but standard abliteration. The idea has fundamental limitations and LLMs tend to work sideways at it -- there's not much progress to be made without rethinking it all -- but it's very conceptually and computationally simple and thus attractive to AIposters.

You can see how the LLMs all come up with the same repackaged ideas: SVD does something deeply similar to mass means (and yet isn't exactly equivalent, so an LLM will _always_ suggest it), the various heuristic search strategies are competing against plain exhaustive search (which is... exhaustive already), and any time you work with tensors the LLM will suggest clipping/norms/smoothing of N flavors "just to be safe". And each of those ends up listed as "Novel" when it's just defensive null checks translated to PyTorch.

I mean, the whole 'distributed search' thing is just because of how many combinations of individual AI slops need to be tested to actually run an eval on this. But the idea is sound! It's just terrible.

I'm not defending the project itself -- I think it's a mess of AIisms of negligible value -- but please at least condemn it w.r.t. what is actually wrong and not 'on vibes'.


wait, SVD / zeroing out the first principal component is an unsupervised technique. The earlier difference-of-means technique relies on the knowledge of which outputs are refusals and which aren’t. How would SVD be able to accomplish this without labels?

edit: the reference is https://arxiv.org/pdf/2512.18901

they are randomly sampling two sets of refusal/nonrefusal activation vectors, stacking them, and taking the elementwise difference between these two matrices. Then they use SVD to get the k top principal components. These are the directions they zero out.

Seems to me that the top principal component should be roughly equivalent to the difference-of-means vector, but wouldn’t the other PCs just capture the variance among the distributions of points sampled? I don’t understand why that’s desirable
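That intuition is easy to check numerically on toy data: when the per-pair differences are a shared mean shift plus isotropic noise, the top singular vector of the (uncentered) stack lines up with the difference-of-means direction, and the remaining principal components mostly capture sampling variance, as you suggest.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64

# Toy model of the sampled differences: a shared "refusal" shift plus
# isotropic noise (purely hypothetical data, not from the paper).
shift = rng.normal(size=d)
shift /= np.linalg.norm(shift)
diffs = 5.0 * shift + rng.normal(size=(n, d))

# Difference-of-means direction vs. top principal direction of the stack.
mean_dir = diffs.mean(axis=0)
mean_dir /= np.linalg.norm(mean_dir)
_, s, vt = np.linalg.svd(diffs, full_matrices=False)

cos = abs(vt[0] @ mean_dir)
print(f"cosine(top PC, mean diff): {cos:.3f}")  # close to 1
print(f"top two singular values: {s[0]:.1f}, {s[1]:.1f}")  # noise PCs much smaller
```

Whether zeroing the lower PCs as well buys anything presumably depends on refusal not actually being rank-1, which is the paper's claim rather than something this toy check can settle.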


[flagged]


It doesn’t even surprise me anymore. The people here think they’re so superior to the already arrogant redditors… same people.

Thing definitely exists… some top level comment somewhere telling about how it doesn’t exist.


Exactly. And I'm downvoted below 0 for pointing this out. :)

Alternately, it's intentional. It very effectively filters out people with your mindset. You can decide whether that's a good thing or not.

Why would a tool that works need to dissuade skeptics from trying it?

Based on his twitter he may just like irony/meta posting a little too much like a lot of modern culture

I immediately read it as intentional, as a sort of attempt at ironic / nihilistic humour re: LLM-generation, given what the tool claims to do.

You don't know what you are talking about. Obviously refusal circuitry does not live in one layer, but the repo is built on a paper with sound foundations from an Anthropic scholar working with a DeepMind interpretability mentor: https://scholar.google.com/citations?view_op=view_citation&h...

You are misrepresenting the situation. The debate isn't about whether they should go with another vendor or not; everybody can agree they would have the right to pick a different vendor. But that's not what they're doing. Instead, they're trying to force Anthropic into doing what they want by applying a designation previously reserved only for Chinese companies like Huawei as punishment for taking their stance, with an unspoken understanding that if Anthropic backs down and allows full usage, the designation will be removed.

The Pentagon does this kind of thing all the time. It's just usually not this official.

Completely false. It's the first time a US company has been designated a supply chain risk. Now the likes of Boeing can't use them. Health companies with Medicare/Tricare contracts don't know and will hold off until it's fully litigated.

This is not the government saying they're going with a different vendor, it's the government saying everyone has to choose to either have federal contracts or Claude, they can't have both.


So sure... and so wrong. I've done government contracting. If you angered the Pentagon enough they would simply blacklist you. You couldn't get a contract or be a part of someone else's contract.

The difference with Anthropic is it's open and above board instead of the customer telling a company "You can't sub with those guys or you won't get the contract."


Your response is myopic. Do you think large health insurers gave a shit about DoD unofficial contracting black lists or even if the DoD would even know who they're contracting with?

The impact of this is far more than just DoD procurement, which is already enormous.


Has there ever been any documented circumstance where significant inside information became public and known thanks to a trade? Most often, the trade is made at the last minute, and the information gets subsequently revealed anyway. And it's impossible to tell whether somebody is an inside trader, a wealthy gambling addict making a stupid decision, or hypothetically a foreign agent pretending to be an inside trader to make people believe in a particular outcome.

It's impossible to know anything for certain; almost everything is probabilistic.

Also I'm not sure how to interpret your criteria because timing matters, I don't think saying 'it gets revealed in the end' is very meaningful.

Anyway, on Polymarket specifically, sure, military strikes are a common one. Seems like a useful signal to go hide in the basement. Outside Polymarket, there were insider trades in 2008 that I'm sure were useful.


If you believe Polymarket as a serious source of truth, consider that somebody manipulated "Will Jesus Christ return before 2027?" because there was a secondary market on whether that market will rise above 5%. Which defeats the whole idea that the betting odds will reflect the truth. Also even pre-manipulation I don't think a 2% chance that Jesus will return was reflective of truth.

https://gizmodo.com/checking-in-on-polymarket-bets-on-christ...


I don't think Google considers such legislation to be their enemy. It would effectively kill F-Droid and other third-party app distribution methods, lock Google into a position of high power over their platforms, and pull the ladder up behind them, and nobody would be able to blame Google for it. I mean, why would anybody submit their ID to a brand-new unproven app store? Seems quite risky; better to just use Google Play.


This is terrible for transparency and record-keeping. X has also blocked Internet Archive access under similar concerns, but the end result is that it's now very difficult to tell who said what and when: posts can be deleted or edited, and no public figure can be held accountable via a trustworthy archive for something wrong they said, or for making contradictory statements over time.

You just have to rely on screenshots that may or may not have been fabricated, and maybe nobody has even captured a screenshot. If it's a public figure you normally trust versus some random person's screenshots, of course you're gonna dismiss the screenshots as fake. It feels almost intentional, bringing the platform into the dark ages.


1. Citation needed. Why would Google be secretly ingesting all of your Discord messages and be using it for... YouTube recommendations? Baader-Meinhof phenomenon is a more likely explanation

2. Already collecting a lot of data is not a reason to collect even more sensitive data. Plenty of people use Discord differently than you do: anonymously participating in projects that use Discord and never saying anything personal over it, for example. This could remove the ability to do so. If Discord's secretive AI decided that an LGBTQ+ project's Discord should be age-restricted, you would be forced to submit enough information to be fully identified and deanonymized, and some foreign government could then build a database linking your full identity to your affiliation with such a project.


This is a scary argument. Should we also ban car emissions/safety testing, because Volvo's competitors might discern something from the results? Should we also stop FCC certification because competitors might glean information out of a device's radio characteristics?

The local residents, if not the public at large, should have a right to know. If not, then it should go both ways and grocery stores shouldn't be allowed to use tracking because my personal enemies might discern something from the milk brand I'm buying


What is always left unclear in these anti-data-center articles is how much the public is actually left in the dark. It's not out of the normal for large developments to be kept under NDA until hitting a threshold of certainty; usually that does not mean the residents are left out of voicing their opinions before ground breaks.


Obviously data center bidders would prefer their activity to be kept in the dark, but does that make for good outcomes for anyone other than the bidders? First, the community would like to weigh in on whether they want a data center or not; often they don't. Then, if they do, they'd rather have a bidding war than some NDA backroom deal with a single entity. All this does is serve Big Tech and Big Capital, and they don't need to run on easy mode, sponging off the small guy at this stage.


> the community would like to weigh in on whether they want a data center

This is the enabler of pure NIMBYism and we have to stop thinking this way. If a place wants this kind of land use and not that kind, then they need to write that down in a statute so everyone knows the rules. Making it all discretionary based on vibes is why Americans can't build anything.


I thought I made it clear: I'm not against data center build-outs per se; a community might decide it's worth it to build one. If a community decides to go ahead with it, make it clear and open for the public to bid on, so the residents get the best deal available (e.g. reduced power bills, reduced property taxes, water usage limits, noise/light pollution limits, whathaveyou...). These massive data centers are a new kind of business that most communities don't have much experience with, and I doubt they've had time to codify the rules. It sounds like the states are starting to add some more rules about transparency, which seems like a step in the right direction for making better deals for all involved.


The subtitle of the article tells us this is happening.

> Wisconsin has now joined several states with legislative proposals to make the process more transparent.

But it is a reactive measure. It has taken years for the impacts of these data centers to trickle down enough for citizens to understand what they are losing in the deal. Partially because so many of the deals were done under cover of NDAs. If anything, this gives NIMBYs more assurance that they are right to be skeptical of any development. The way these companies act will only increase NIMBYism.

> Making it all discretionary based on vibes is why Americans can't build anything.

Trusting large corporations to provide a full and accurate analysis of downside risks is also damaging.


> If a place wants this kind of land use and not that kind, then they need to write that down in a statute so everyone knows the rules.

Ironically this is a recipe for how you get nothing built. Zoning laws are much more potent than people showing up at city council meetings.


I feel like the term "community" is leading intuitions astray here. The actual decision at question here is whether the local government provides the necessary approvals for a company to build what they want on their private property.

It's good and proper for the government to consider the impacts on a local community before approving a big construction project. That process will need to involve some amount of open community consultation, and reasonable minds can differ on when and how that needs to start. The article describes a concrete proposal at the end, where NDAs would be allowed for the due diligence phase but not once the formal approval process begins; that seems fine.

It's neither good nor proper for the government to selectively withhold approval for politically disfavored industries, or to host a "bidding war" where anyone seeking approvals must out-bribe their competitors.


It's the same argument as for high-density hog farming. If the use of private property may impinge on the neighbors, whether through invasive noise or through costs to public utility infrastructure (power, water), then the community ought to have some insight and input, the same as they have input into whether a high-density hog farm can open right on the border of the community.

Yes, some people see the datacenters as part of an ethical issue. I agree it's not proper for permits to be withheld on purely ethical grounds; laws should be passed instead. But there are a lot of side effects to having a datacenter near your property that are entirely concrete issues.


Why shouldn't permits be withheld on ethical grounds? Isn't that just giving permission for companies to be unethical and get away with it?


If a government wants to penalize companies for unethical behavior, they should pass a neutral and generally applicable law that provides for such penalties. Withholding permission to do random things based on ad hoc judgments of the company involved is a recipe for corruption.


Clearly there needs to be room for both things to occur. You should absolutely begin with passing laws, but to think that the laws on the books can cover every situation is naive. When companies skirt the law and cause harm, there needs to be a remedy.


I don't agree. The benefits of a business environment governed by due process and the rule of law far outweigh the benefits of individual government actors having arbitrary discretion to fill the gaps. As we've seen clearly on the federal level this past year, once you create that discretion, the common way for corporate executives to "prove" that they're nice and generous and deserve favorable treatment is not good behavior but open bribery of public officials.


Bribery is illegal. What hope do you have for due process and the rule of law when it is being carried out as it is now? You can't use an extraordinary case to justify your belief about the ordinary case.

Also, we don't live in a world adjudicated by machines, there will always be discretion and the potential for special favors. No matter how much you tie the hands of regulators there will be some actor who will have the power to extort. Not to mention that regulation is not opposed to due process and the rule of law, but is the most important component of both.

Imagining a world without discretion is imagining a world where corporations can do as much irreparable harm as they want as long as there isn't a law against it.


I agree with you; this should be handled by the legislative process. But we should also agree that secret deals announced as a fait accompli are pretty fertile ground for corruption too.


Right, and as I said I agree with that. But is there any reason to worry that communities aren't getting the input they're entitled to? The article mentions one case in the Madison suburbs, where "officials worked behind the scenes for months" and yet the residents were able to get the project cancelled when the NDA broke and they decided they didn't want it.


You make this sound like a conspiracy. This is normal practice in economic development: check off the boxes, then announce to the public. The public rarely has much power in voicing their opinion, but data centers are the current evil entity.


> data centers are the current evil entity.

There's a reason for that: they compete for resources but contribute relatively little back to the local economy. In that sense they're quite different from previous large corporate investments in a local area.


Again, I think it's a muddy example. I have yet to see compelling data that, on average, data centers are meaningfully raising rates; most of the rate increases are due more to the aging infrastructure in America that was neglected for too long.

If anything these should be examples on the failure of how these resources are being sold and good opportunity to build a better system.


What kind of say do the residents have when it’s nearly a done deal?

Unless the residents have a strong enough chance to veto, they’re just speaking into the void as far as the company is concerned.


Typically constituents don’t have any ability to veto. I imagine there are some cases in CA, thinking of that amusing article about an ice cream shop getting blocked by another ice cream shop.

It's usually an indirect vote with your voice. To be frank, people don't have that much of a role in what business gets built if it aligns with the state's economic goals and zoning is not being critically changed.

I think the bigger discussion is if resources are going to be constrained can we make sure the use is being properly charged for resource buildout. It’s the same problem with building sports arenas or sweetheart tax deals for manufacturing plants, they often don’t pan out.


It’s definitely a result of the money at play, which is unprecedented in scale and (imo) speculation.

But this is, in theory, why we have laws: to fight power imbalances, and money is of course power.

Tough for me to be optimistic about law and order right now though, especially when it comes to the president’s biggest donors and the vice president’s handlers.


The building of the American railroads was the largest capital endeavor in known history, IIRC... and Stanford was at the center of that, too.


Ah my bad. But also, if we’re comparing buildout of infrastructure to the construction of the American Railroad system, especially in the context of lawbreaking and general immoral and unethical behavior…

Point kind of proven, yeah? One more argument for the “return to the gilded age” debates.

Edit: you’re speaking kind of authoritatively on the subject though. Care to share some figures? The AI bubble is definitely measured in trillions in 2026 USD. Was the railroad buildout trillions of dollars?


Depends on when you stop calculating, and on how exactly you value the work.

By 1900 the United States had 215 thousand miles of railroads: https://www.loc.gov/classroom-materials/united-states-histor...

Depending on how you value land, mileage, and work, this could easily be north of 1T modern dollars.


Land value underneath railroad tracks is an interesting subject. Most land value is reasonably calculated by width * length, and maybe some airspace rights. And that makes sense to our human brains, because we can look at a parcel of land and acknowledge it might be worth $10^x for some x given inflation.

But railroads kind of fail with this because you might have a landowner who prices the edge of their parcel at $1,000,000,000,000 because they know you need that exact piece of land for your railroad, and if the railroad is super long you might run into 10 of these maniacs.

Meanwhile the vast majority of your line might be worth less than any adjacent farmland, square foot by square foot, especially if it’s rocky or unstable etc.

Having a continuous line of land for many miles also has its own intrinsic value, much more than owning any particular segment (especially as it allows you to build a railroad hah).

Anyway, suffice to say, I don't think "land value underneath railroads from the 19th century" is something that's easily estimated.


As a percentage of GDP investments in the railroad buildout in the US was comparable or slightly higher than AI-related investments. But they are on the same order of magnitude, which says a lot about the scale of AI.

> AI infrastructure has risen by $400 billion since 2022. A notable chunk of this spending has been focused on information processing equipment, which spiked at a 39% annualized rate in the first half of 2025. Harvard economist Jason Furman commented that investment in information processing equipment & software is equivalent to only 4% of US GDP, but was responsible for 92% of GDP growth in the first half of 2025. If you exclude these categories, the US economy grew at only a 0.1% annual rate in the first half.

https://www.cadtm.org/The-AI-bubble-and-the-US-economy?utm_s...


> Should we also ban car emissions/safety testing, because Volvo's competitors might discern something from the results? Should we also stop FCC certification because competitors might glean information out of a device's radio characteristics?

In the US neither of those are generally made public per se. They are made public when the thing actually passes testing or certification.


Naw - corps will just get engineers to fudge the emissions numbers, then they have someone low-level and easy to blame and remove from the organization... VW:

https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal


Don't give them any ideas


It was only through external review that the problems with the project were discovered, and the blog post was clearly written for marketing, as it hardly shared any actual details about the result other than an unexplained video they called a screenshot. Good-faith research would have pointed out the limitations of their system.


You don't need to wait and see; Kimi K2 has the same hardware requirements and has several providers on OpenRouter:

https://openrouter.ai/moonshotai/kimi-k2-thinking
https://openrouter.ai/moonshotai/kimi-k2-0905
https://openrouter.ai/moonshotai/kimi-k2-0905:exacto
https://openrouter.ai/moonshotai/kimi-k2

Generally it seems to be in the neighborhood of $0.50/1M for input and $2.50/1M for output
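At those ballpark rates (taken from the comment above; actual provider pricing varies), per-request cost is simple arithmetic:

```python
def request_cost(input_tokens, output_tokens,
                 in_rate=0.50, out_rate=2.50):
    """Dollar cost given per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# e.g. a 20k-token prompt with a 2k-token completion:
print(f"${request_cost(20_000, 2_000):.4f}")  # $0.0150
```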

