Hacker News | past | comments | ask | show | jobs | submit | thijsr's comments

I may be biased, but I do think it is a very fun talk

(disclosure: I am the speaker)


A digital euro is intended to be the digital equivalent of cash. It is issued directly by the central bank. Currently, consumers cannot have an account at the central bank. They have a balance at a commercial bank, and the commercial bank has an account at the central bank. Right now, you must go through a private middleman to do any banking. The digital euro should offer a public alternative to that.

(but it probably won't ever happen, because banks are lobbying against it with FUD campaigns; they feel it threatens their existence)

Wero is something completely different. It allows consumers to easily pay merchants, mostly online. The digital euro is not a payment network in the same sense as Visa, Mastercard, iDEAL and others.


Is it really not?

https://www.ecb.europa.eu/euro/digital_euro/faqs/html/ecb.fa...

It says they'll have offline transactions. If they have that, then you can probably make those "offline" transactions from kilometers away from the receiver. We'll see how things evolve; I'm still not convinced that Wero will have any use once the digital euro arrives.


Hi, author here. Thanks for posting this! I gave a talk yesterday at the 39th Chaos Communication Congress in Hamburg that goes into detail about how the vulnerability works [1]. Short summary: on affected CPUs, all of host physical memory can be read, despite commonly applied software mitigations. On Google Cloud, we were able to leak all of the physical memory of other tenants as well, without having to interact with the victim virtual machine.

[1] https://media.ccc.de/v/39c3-spectre-in-the-real-world-leakin...


Disclosure: I used to work on GCE.

Nice write up and very clever work. I'm surprised by the AWS response that you linked to though (https://aws.amazon.com/blogs/security/ec2-defenses-against-l...).

While I was sure they'd note that Nitro doesn't have this vulnerability due to its design, it seems weird not to talk about Firecracker and Lambda and so on. Maybe those are always on Cascade Lake+ hardware? (I also haven't followed this space for 5 years, so maybe I'm asking the wrong question)


We've only verified EC2 during our research, but you do make a good point here. Nitro wasn't vulnerable. Firecracker might have been, considering that it is also built on top of KVM. Firecracker was not specifically designed to defend against hardware vulnerabilities [1], so I don't see an immediate reason why the attack wouldn't have worked there.

We had to limit the scope of the project somewhere unfortunately, but it would have been nice to check Firecracker and Lambda as well.

[1] https://github.com/firecracker-microvm/firecracker/blob/main...


Thank you for the presentation. Great work!


> I also don't love enums for errors because it means adding any new error type will be a breaking change

You can annotate your error enum with #[non_exhaustive]; then adding a new variant will not be a breaking change. Effectively, you force anybody matching on the enum from outside your crate to include a wildcard arm, i.e. a "default" case for variants that don't exist yet.
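A small sketch of how that plays out (the enum and its variants are made up for illustration):

```rust
// Hypothetical error enum; marked non_exhaustive so that adding a
// variant later is not a semver-breaking change for downstream crates.
#[non_exhaustive]
#[derive(Debug)]
pub enum ParseError {
    UnexpectedEof,
    InvalidToken,
}

// Downstream crates are forced to include a wildcard arm. (Within the
// defining crate the match is still exhaustive, hence the allow.)
#[allow(unreachable_patterns)]
pub fn describe(err: &ParseError) -> &'static str {
    match err {
        ParseError::UnexpectedEof => "unexpected end of input",
        ParseError::InvalidToken => "invalid token",
        // A variant added in a future release lands here instead of
        // breaking the build of every consumer.
        _ => "unknown parse error",
    }
}
```

The trade-off is that consumers can no longer get a compiler error pointing them at the new variant; they silently fall into the wildcard arm.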


Scientific papers have an abstract, which already serves as a short summary.


Have you read the abstracts? Most of them are teasers and do not contain any information from the conclusion etc. So a summary is still valid?


This is a project that we've been working on in collaboration with Google and AWS. We present a vulnerability that allows a malicious virtual machine to leak all physical memory of its host, including the memory of other virtual machines running on the system. L1TF Reloaded combines two long-known transient execution vulnerabilities, L1TF and (Half-)Spectre. By combining them, commonly deployed software-based mitigations against L1TF, such as L1d flushing and core scheduling, can be circumvented.

We've demonstrated our attack on real-world KVM-based cloud solutions. Both Google Cloud [1] and AWS [2] wrote a blog post in response to this attack, describing how they mitigate L1TF Reloaded and how they harden their systems against unknown transient execution attacks. Google also decided to award us a bug bounty of $151,515, the highest bounty of their Cloud VRP yet.

PoC is available at https://github.com/ThijsRay/l1tf_reloaded

[1] this submission

[2] https://aws.amazon.com/blogs/security/ec2-defenses-against-l...


When you can modify the microcode of a CPU, you can modify the behaviour of the RDRAND/RDSEED instructions. For example, using EntrySign [1] on AMD, you can make RDRAND always return 4 (chosen by a fair dice roll, guaranteed to be random).

[1] https://bughunters.google.com/blog/5424842357473280/zen-and-...


I don't mean to say that RDSEED is sufficient for security. But a correctly implemented and properly secured RDSEED is indeed quantum random.

I.e.: while not all RDSEED implementations are correct (microcode vulnerabilities, virtual machine emulation, etc.), it is possible to build a true RNG with cryptographic-level security on top of a correct RDSEED implementation.

------

This is an important point, because a lot of people still think you need Geiger counters and/or crazy radio antennas to find sufficient sources of true entropy. Nope!! The easiest source of true quantum entropy is heat, and that's inside every chip. A good implementation can tap into that heat and provide perfect randomness.

Just yeah: microcode vulnerabilities, VM vulnerabilities, etc. etc. There's a whole line of other stuff you also need to keep secure. But those are tractable problems, within the skills of a typical IT/programming team. The bigger picture is that pn-junction shot noise is a sufficient source of randomness, and it exists in every single transistor of your ~billion-transistor chips/CPUs. You do need to build the correct amplifiers to see this noise, but that's what RDSEED is in practice.
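For what it's worth, tapping RDSEED from software looks roughly like this (a sketch, assuming an x86_64 CPU; the retry loop exists because RDSEED legitimately reports failure while the on-chip entropy conditioner refills):

```rust
#[cfg(target_arch = "x86_64")]
fn rdseed64() -> Option<u64> {
    if !is_x86_feature_detected!("rdseed") {
        return None; // CPU (or hypervisor) doesn't expose RDSEED
    }
    let mut seed = 0u64;
    for _ in 0..128 {
        // SAFETY: guarded by the runtime feature check above.
        // Returns 1 (carry flag set) when `seed` holds fresh entropy.
        if unsafe { core::arch::x86_64::_rdseed64_step(&mut seed) } == 1 {
            return Some(seed);
        }
        core::hint::spin_loop(); // entropy pool drained; retry
    }
    None // persistent failure is itself worth treating as an error
}

// Fallback so the function exists on non-x86 targets.
#[cfg(not(target_arch = "x86_64"))]
fn rdseed64() -> Option<u64> {
    None
}
```

In practice you'd feed this into a DRBG as seed material rather than consuming it raw, since RDSEED's throughput is low.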


Rowhammer is inherent to the way we design DRAM. It is a problem that memory manufacturers have known about for a long time and that is very hard, if not impossible, to fix. In fact, Rowhammer only gets worse as memory density increases.


It’s a matter of percentages… not all manufacturers fell to the rowhammer attack.

The positive part of the original rowhammer report was that it gave us a new tool to validate memory (it caused failures much faster than other validation methods).


As far as I am aware, the course material is not public. Practical assignments are an integral part of the courses given by the VUSEC group, and unfortunately those are difficult to do remotely without the course infrastructure.

The Binary and Malware Analysis course that you mentioned builds on top of the book "Practical Binary Analysis" by Dennis Andriesse, so you could grab a copy of that if you are interested.


Ah yea, he gave a guest lecture on how he hacked a botnet!

More info here: https://krebsonsecurity.com/2014/06/operation-tovar-targets-...

it was a while back :)


Thanks. I understand that it is difficult to do it remotely.

I do have the book! I bought it a while ago but haven't had the pleasure of checking it out yet.


Disabling SMT alone isn’t enough to mitigate CPU vulnerabilities. For full protection against issues like L1TF or MDS, you must both enable the relevant mitigations and disable SMT. Mitigations defend against attacks where an attacker executes on the same core after the victim, while disabling SMT protects against scenarios where the attacker runs concurrently with the victim.
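On a Linux/KVM host this maps to kernel boot parameters plus the sysfs vulnerability interface; a sketch (exact options depend on your kernel version and distribution):

```shell
# Kernel command line: enable L1d flushing on VM entry AND force SMT
# off — either measure alone leaves an attack window open:
#   l1tf=full,force mds=full,nosmt

# Check which mitigations the running kernel actually applies:
cat /sys/devices/system/cpu/vulnerabilities/l1tf
cat /sys/devices/system/cpu/vulnerabilities/mds
cat /sys/devices/system/cpu/smt/control  # "off"/"forceoff" when SMT is disabled
```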

