Hacker News | past | comments | ask | show | jobs | submit | mswphd's comments

There is a strong anti-QKD bias among experts who understand QKD. It is a fun academic concept, but it does not solve a real-world problem, it is not available at costs remotely comparable to classical cryptography, and even if you pay the enormous costs, it is trivial for an attacker to completely disrupt your communication in a way that cannot be recovered from without out-of-band communication (e.g. sending a courier, or falling back to computational cryptography).

If you hate the NSA, that's fine. Nobody in the EU cried foul over the NSA's recommendations, though (and the NIST-winning schemes are European). Chinese scholars submitted some fundamentally similar schemes, and the Chinese Academy of Sciences has formally recommended lattice-based schemes. While the Chinese (government-run) standardization effort is only starting, it is a very good bet that it will settle on a lattice-based scheme.

So, unless you think all of the world's governments (again, including China) are in a massive cabal to allow the NSA specifically to spy on the entire world, #2 is not a particularly valid question.


BB84 (and QKD overall) requires authenticated channels. You have to get those somewhere. You can get them from an information-theoretically secure MAC, but that has significant downsides. You can get them from computationally secure primitives, but then there's no point in using QKD in the first place. You cannot instantiate QKD securely without one of those two choices, though.

> You can get them with computationally secure primitives, but then there's no point in using QKD in the first place.

I don’t entirely agree. You can build a computationally secure authenticated channel using symmetric primitives (e.g. hashes) that are very, very likely to survive for a long time. And you can build comparably secure asymmetric authentication schemes from the same primitives (hash-based signatures are a thing).

But to build a classical key exchange system, you need more exotic primitives (Diffie-Hellman or public-key encryption / KEM schemes), and the primitives of this sort that are supposedly post-quantum secure have not been studied for nearly as long and have much more structure that might make them attackable.

Not to mention that attacking the authenticated channel in QKD cannot give a store-now-decrypt-later attack.
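To make the "symmetric primitives are enough for authentication" point concrete, here is a toy sketch of the simplest hash-based signature, a Lamport one-time signature, in Python. This is an illustration of the idea only, not a production scheme (real deployments use schemes like SPHINCS+):

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()
BITS = 256  # we sign 256-bit message digests

def keygen():
    # secret key: two random preimages per digest bit
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(BITS)]
    # public key: their hashes
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    d = int.from_bytes(H(msg), "big")
    # reveal one preimage per bit of the digest
    return [sk[i][(d >> i) & 1] for i in range(BITS)]

def verify(pk, msg: bytes, sig) -> bool:
    d = int.from_bytes(H(msg), "big")
    return all(H(sig[i]) == pk[i][(d >> i) & 1] for i in range(BITS))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)  # wrong message fails (w.o.p.)
```

Security rests only on the preimage resistance of the hash, exactly the kind of well-studied symmetric assumption referred to above; the catch is that each key pair may sign only one message.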


At that point you can just pre-share a key and use AES.

Nope -- that gives neither public-key capabilities nor forward secrecy.

that's not what people say? and pre-quantum crypto is also vulnerable to yet discovered classical algorithms?

> that's not what people say?

Well, they should, if they want to be honest and mathematically rigorous.

For instance, this is the case for NIST's proposed post-quantum cryptography standard Kyber, which relies on lattice-based methods.

> and pre-quantum crypto is also vulnerable to yet discovered classical algorithms?

True, and also disconcerting; the most reckless case being fungible currency that relies on such methods.

We should be working on standardising and moving towards methods that are independent of, rather than rely on, unresolved questions in mathematics.


You cannot be mathematically rigorous about computational lower bounds. It is not possible with current mathematics: no lower bounds relevant to cryptography exist. So any computational cryptography must separate into

1. provable constructions based on "hard" problems, and
2. best-effort cryptanalysis of those "hard" problems.

This is true of lattice-based problems. It is true of EC crypto. It is true of RSA. It is true of McEliece. It is true of AES. That is the nature of things, and there is no avoiding it.

The analysis of Kyber was honest and mathematically rigorous. It's also beside the point, as all of your criticisms hold for EC/AES as well (despite EC having some reasonable lower bounds, e.g. in (extensions of) the generic group model; these of course rely on the conjecture that EC groups behave like generic groups).

> We should be working on standardising and moving towards methods that are independent of, rather than rely on, unresolved questions in mathematics.

There are no known methods that are remotely economically viable. There is (completely seriously) a clearer path towards fixing climate change than towards what you propose. There is also a clearer path towards fixing global hunger. Relying solely on mathematically provable techniques in cryptography is a complete fantasy, and not one worth engaging with.

Furthermore, it's completely pointless. We might as well frame your question as

> We cannot prove that AES is hard, so we should not use it.

Why? It would be cool to prove that AES is hard. Sounds fun. But practically, the hardness assumptions of deployed cryptography are almost never the cause of a security vulnerability. If we care about secure systems, proving AES hard is so far down the priority list that it is difficult to think of something less important. Again, completely seriously, we would have MUCH more secure systems if we paid every person in the country to use better passwords.

Given that this is the case, it seems unreasonable to suggest spending \Omega(billions) updating our network infrastructure to worse-performing links just to "fix" a problem that doesn't exist. I say this even as a cryptographer who (unreasonably) dislikes heuristics in the field and tries to replace them with provable alternatives. It is a fun academic exercise. But it is not a real-world issue.


I wouldn't call it "solid theoretical footing". The rough sketch of QKD is

1. BB84 key exchange requires an authenticated channel. Typically you get this from
2. a Carter-Wegman MAC, which is information-theoretically secure, but requires shared randomness that cannot be reused.

Successful protocol execution refreshes the randomness (you net-gain from it), so you can communicate back and forth continuously while everything is working. A MitM who simulates a network failure, though, can expend some of your pre-shared randomness without it being refreshed. If they do this enough, they can exhaust your shared randomness and bring down the link until you exchange more randomness out of band somehow. If you want to maintain information-theoretic security, this might involve e.g. a courier with a USB stick (or a carrier pigeon, who knows).

This is still "secure", but it is also a significant issue that any QKD system (even "real" QKD) has and classical cryptography does not, and it has always made me question the "solid" story for QKD.
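For the curious, the Carter-Wegman construction mentioned above can be sketched as a polynomial-evaluation hash masked by fresh one-time randomness. A toy Python version over a prime field (field size and key handling chosen for illustration only):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; we work in the field GF(P)

def keygen():
    # (r, s): r may be reused across messages; s must be fresh per tag,
    # which is exactly the shared randomness QKD consumes.
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(r, s, blocks):
    # polynomial-evaluation hash: h_r(m) = sum m_i * r^i (Horner's rule),
    # then mask with the one-time value s
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % P
    return (acc + s) % P

r, s = keygen()
msg = [3, 1, 4, 1, 5]
t = tag(r, s, msg)
assert tag(r, s, msg) == t
assert tag(r, s, [3, 1, 4, 1, 6]) != t  # modified message fails (w.o.p.)
```

The forgery probability is bounded by L/P per message (L = number of blocks), with no computational assumption; the cost is that each tag burns a fresh field element s from the pre-shared pool, which is the exhaustion attack surface described above.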


The Rust they've written (so far) is highly unidiomatic (and uses a ton of unsafe). I can't speak to the Zig part, but it seems plausible to me that it is line-by-line, horrendous Rust.

Whether or not they can clean it up is an interesting question.


Zig can do some things w.r.t. compile-time compute that sit somewhere between Rust's const expressions and proc-macro usage. This isn't something Rust (or most languages) has. So even if we are generous and interpret "line by line" as "expression by expression", this isn't fully doable.

But also, telling an LLM to do a line-by-line translation and giving it a file is guaranteed never to produce a truly line-by-line translation, due to how LLMs work. That's fine: you don't say "line-by-line" to actually make it work line by line, but to "convince" it not to do the opposite (moving things around wholesale, completely rewriting components based on guesses about what they are supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (including logic bugs), even though it isn't literally line-by-line. That then lets you fuzz the initial translation for behavioral discrepancies before doing any larger refactoring, which may include bug fixes.

Though tbh I would prefer if the zig -> terrible-rust part were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like a horror story w.r.t. subtle hallucinations and similar.


If anyone can do it, it's Anthropic. The question is more how long it will take and how many tokens it will burn/how much groundwater.

The Rust port (at least currently) heavily uses unsafe as well:

https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...

That isn't particularly surprising, but the point is that I would expect getting things more stable than the Zig version to take a while.


That's completely normal for the first step of a language translation. Actually, it's required if you do a file-by-file translation first while wanting to maintain interface compatibility.

I'm not sure I would take this kind of path; I would focus much more on refactoring the project into small, easily translatable components with small boundaries. But it's cheap to try things.


Yes, independently deriving all of the math cryptography depends on would not be easy. Fortunately, that's not really how anybody learns.

Along those lines, you do not need to understand the proof of Euler's totient theorem to understand cryptography. It is a distraction. All you need (at most) is to know that the result is true, and even then it's only fundamentally important for RSA, which you likely shouldn't bother learning about these days. RSA simultaneously

1. looks very simple (though the simple version is horrendously insecure),
2. does not have particularly good performance,
3. does not have particularly good security (either pre- or post-quantum), and
4. has been in the process of being phased out for quite some time now.

This is not a good combination of properties. The fact that cryptography textbooks cover it is mostly due to historical tradition. I would personally argue it is time to omit it from instruction materials.
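To see why the "simple" (textbook) version of RSA is horrendously insecure, a toy Python sketch with deliberately tiny primes suffices:

```python
# Toy "textbook RSA" with tiny primes -- for illustration only
p, q = 61, 53
n = p * q                  # modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

# Deterministic: identical plaintexts give identical ciphertexts,
# so an eavesdropper can spot repeats (no semantic security).
assert enc(42) == enc(42)

# Malleable: E(m1) * E(m2) = E(m1 * m2 mod n), so an attacker can
# manipulate ciphertexts without knowing any key material.
c = (enc(6) * enc(7)) % n
assert dec(c) == 42
```

Fixing these problems requires randomized padding (e.g. OAEP), at which point the "simple" scheme is gone; that gap between the textbook version and anything deployable is part of why RSA makes a poor teaching vehicle.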


> Along those lines, you do not need to understand the proof of Euler's totient theorem to understand cryptography.

Well, I had to when I learned cryptography, but I learned it from a class offered by the math department, so I guess that's rather unsurprising :).

> even then it's only fundamentally important for RSA […] this is not a good combination of properties

Strong agree here.


In general the math is not actually that hard. It will be things you don't know beforehand, but a general undergraduate cryptography class will not assume the undergraduates have a much better math background than you. Typically just:

1. comfort with logical operations/arithmetic over F_2,
2. discrete probability over finite sets, and
3. some basic complexity theory (mostly to reason about running time, though being familiar with proofs by reduction can help as well if you actually want to do security proofs).
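As a taste of the first prerequisite: arithmetic over F_2 is just XOR (addition and subtraction coincide), which is all you need to state the one-time pad. A minimal Python sketch:

```python
import secrets

# Over F_2, "+" and "-" are both XOR, so encryption and decryption
# are the same operation.
msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))          # uniform one-time key

ct = bytes(m ^ k for m, k in zip(msg, key))  # add the key
pt = bytes(c ^ k for c, k in zip(ct, key))   # subtract it again
assert pt == msg
```

With a uniform, never-reused key, this is the one-time pad, whose perfect secrecy proof is exactly the kind of discrete-probability-over-finite-sets argument the second prerequisite covers.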

A decent idea might be to take some "good" undergraduate cryptography class's course resources and use those. For example, Mihir Bellare is an extremely accomplished cryptographer. The materials for his F2018 undergrad course are here:

https://cseweb.ucsd.edu/~mihir/cse107/slides.html

He's also written a longer series of lecture notes on cryptography that's freely available. I don't know where it is on his webpage these days, but you can find it below

https://www.cs.tufts.edu/comp/165/papers/Goldwasser-Bellare-...

The difficult part with this approach is not being able to ask questions easily. To "fix" this, you can either:

* use AI, though that has its own issues, or
* use some community forum, such as crypto.stackexchange.com

If you want a full book, the typical (undergraduate) one that roughly matches the above syllabus is "Introduction to Modern Cryptography" by Katz and Lindell.

I've also heard good things about Mike Rosulek's The Joy of Cryptography:

https://joyofcryptography.com/

Boneh and Shoup have a decent (freely available, and very comprehensive) textbook at the graduate level

https://toc.cryptobook.us/

but it follows (roughly) the standard undergraduate curriculum, so if the slides I linked to are too sparse at some point, you can look that topic up in Boneh and Shoup (or use Boneh and Shoup as context to ask an LLM more targeted questions).

That all being said, the main difficulty for someone in your position is likely determining "what to learn" in cryptography. The easy thing would be to follow the standard undergraduate track, but if you're interested in any particular topic there are likely better routes to take.


Yeah, I would generally say that learning the actual schemes is (usually) doable for a casual enthusiast, but learning how the SOTA attacks work (which certainly motivates scheme design) is much more difficult.

Didn't the PS3 have a hardcoded (reused) nonce in its ECDSA implementation that allowed full key recovery? I would agree that I doubt LLMs let people easily mount side-channel attacks on consumer electronics, though.
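The nonce-reuse flaw boils down to simple algebra: two ECDSA signatures sharing a nonce k leak the private key. A toy Python sketch of just that algebra, working mod a small prime standing in for the curve's group order (r is fixed arbitrarily here, since its derivation from k*G plays no role in the key-recovery equations):

```python
n = 2**31 - 1          # toy prime standing in for the curve order
d = 123456789          # "private key"
k = 987654321          # the nonce -- reused, as in the PS3 implementation
r = 55555              # stand-in for the x-coordinate of k*G mod n

def sign(z):
    # ECDSA signing equation: s = k^-1 * (z + r*d) mod n
    return (pow(k, -1, n) * (z + r * d)) % n

z1, z2 = 1111, 2222    # two message hashes
s1, s2 = sign(z1), sign(z2)

# The attacker sees (r, s1, z1) and (r, s2, z2) with the same r:
#   s1 - s2 = k^-1 * (z1 - z2)  =>  k = (z1 - z2) / (s1 - s2)
#   s1 * k  = z1 + r*d          =>  d = (s1*k - z1) / r
k_rec = ((z1 - z2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n
assert (k_rec, d_rec) == (k, d)
```

With a fresh random k per signature the division by (s1 - s2) gives the attacker nothing; reusing k collapses the whole scheme with two signatures and grade-school modular arithmetic.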

Yes indeed; that chain of exploits was all software, not hardware. It was developed after the Hotz exploit and Sony's subsequent shuttering of OtherOS.

It didn't directly give access to anything, however. IIRC they relied heavily on other complex exploits they developed themselves, as well as on earlier exploits they could reach by rolling back the firmware, indeed by abusing the ECDSA implementation. At least, that turned out to be the path of least resistance. Without the earlier exploits, less would have been known about the system to work with.

Their presentation [1] [2] is still a very interesting watch.

[1] https://www.youtube.com/watch?v=5E0DkoQjCmI

[2] https://fahrplan.events.ccc.de/congress/2010/Fahrplan/attach...


^-- ignore much of the IIRC above; I completely misremembered, I now notice after rewatching the talk.
