ericpauley's comments

GGP already showed the marginal power cost is well below $2.

There is so much more to lifecycle sustainment cost than that.

Rackspace. Networking. Physical safety. Physical security. Sales staff. Support staff. Legal. Finance. HR. Support staff for those folks.

That’s just off the top of my head. Sitting down for a couple of days, at the very least, like a business should, would likely reveal significant costs that $2 won’t cover.


These are all costs of any server hosting business. Other commenters have already shown that $2/hr for a racked 1U server at 400W is perfectly sustainable.
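As a rough sanity check (the electricity rate and overhead factor here are illustrative assumptions, not figures from the thread), the raw power cost for a 400W box is a small fraction of $2/hr:

```python
# Back-of-envelope hourly electricity cost for a racked server.
# usd_per_kwh and pue are assumed illustrative values.
def hourly_power_cost(watts, usd_per_kwh=0.10, pue=1.5):
    """Cost in USD to run `watts` for one hour, including datacenter
    overhead via a Power Usage Effectiveness (PUE) multiplier."""
    return (watts / 1000) * pue * usd_per_kwh

cost = hourly_power_cost(400)  # 0.4 kWh * 1.5 * $0.10 = $0.06
print(f"${cost:.2f}/hr")
```

Even doubling the assumed rate leaves power as pennies per hour; the real question is the other line items above.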

Just because you have all of those costs already doesn't make them go away. If you're cross-subsidising the H100 access with the rest of a profitable business, that's a choice you can make, but it doesn't mean it's suddenly profitable at $2: you still need the profitable rest of the business in order to lose money here.

So do you terminate all of the above right now, or continue selling at a loss (which still extends the runway) and wait for better times? Note that similar situations occur from time to time in pretty much every market out there.

The market doesn't care how much you're losing, it will set a price and it's up to you to take it, or leave it.


But the dragon does exist. This is like saying you’re stranded when you’re car camping in the woods.


Helicopters exist, but when one picks me up from my campsite in the woods, I'm probably being rescued.


Agreed; after 7 hopeful years this feels like a declaration of defeat for V8 isolate-based FaaS.


Most people probably have a PT account lying around or can find a friend with one. I just checked and my account from 6+ years ago is still active.


> It looks like the natural follow up of this article is to remove the LLM altogether.

Welcome to scientific papers in 2024…


Yes, but getting true randomness is not actually that hard. Many modern chips have hardware true random number generators, and other noisy system components can also be used as entropy sources. The RNGs used in operating systems are designed to take in arbitrary data to improve their randomness.
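A minimal Python illustration of that point (the timing jitter below is only a toy stand-in for a real hardware noise source):

```python
import hashlib
import os
import time

# os.urandom reads the operating system's CSPRNG, which the kernel
# continuously reseeds from hardware noise sources (interrupt timing,
# on-chip TRNGs, etc.).
key = os.urandom(32)

# Arbitrary noisy input can be folded in by hashing it together with
# existing randomness; kernels do the equivalent internally when
# drivers contribute entropy. Timing jitter here is just a toy example
# of a "noisy system component".
jitter = str(time.perf_counter_ns()).encode()
mixed = hashlib.sha256(key + jitter).digest()

print(len(key), len(mixed))  # 32 32
```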


Bania!


Completely agree that offensive research has (for better or for worse) become a mainstay at the major venues.

As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately vendors are often too unskilled or obstinate to properly respond to disclosure from academics.

For their part academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the majors will eventually have more involved expectations for reducing harm from disclosures, such as by expanding the scope of the “vendor” to other possible mitigating parties, like OS or Firewall vendors.


> As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation.

That assumes that without these disclosures we wouldn't see active exploits. I'm not sure I agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.


Both can be true. It’s intellectually lazy to throw up our hands and say attacks would happen anyway instead of doing our best to mitigate harms.


I was going to respond in detail to this, but realized I'd be recapitulating an age-old debate about full- vs. "responsible-" disclosure, and it occurred to me that I haven't been in one of those debates in many years, because I think the issue is dead and buried.


Where's the source code?


The source code is proprietary, but it shouldn't take much work to replicate, fortunately (you just need to upload files at the right paths).


Like treating the path as an object key, and storing the value as JSON or a blob?


Security researchers definitely do the naming gimmick for personal brand purposes. This may not be as obvious when it’s successful, but academic papers routinely name vulnerabilities when there is no real benefit to users.


The whole point of naming vulnerabilities is to establish a vernacular about them, so it's not surprising that academic papers name them. The literature about hardware microarchitectural attacks, for instance, would be fucking inscrutable (even more than it is now) without the names.


I'd be happy to file all of them under Spectre/MDS, except for the ones that aren't Spectre/MDS, of course. They don't all need unique names. Most of them are all instances of the same pattern: some value is not present in a register when it's needed, and an Intel CPU design continues to execute speculatively with the previous contents of that register instead of inserting a pipeline bubble, leaking the previous contents of that register. Using an inter-core communication buffer, instead of a load data buffer like the last person, I don't think deserves a new name and logo. A new write-up, yes.

Wikipedia puts them all under one page: https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...


I don't even understand the impulse to lose the names. Names aren't achievement awards. We already have Best Paper awards at the Big 4 and the Pwnies (for however seriously you take that). The names don't cost anybody anything, and they're occasionally helpful.

Name them all.

You see the same weird discussions about CVEs, and people wanting to squash CVEs down (or not issue them at all) because the research work is deemed insufficient to merit the recognition. As if recognition for work was ever even ostensibly what the CVE program was about.

