Making sure crypto remains insecure [pdf] (cr.yp.to)
439 points by zorked on Oct 19, 2014 | 171 comments



Bernstein on a "thought experiment": an analysis of how a government could have deliberately weakened cryptographic standards through various means. It's actually quite compelling, given that we know the NSA weakened the DES key size back in the 70s, against IBM's will, and that we know Dual_EC_DRBG was also backdoored by the NSA a few years ago.

Bernstein's main point is that we need independent, peer-reviewed cryptography that doesn't depend on government standardization. This talk is mostly a justification for his own work on ChaCha20/Poly1305 and DNSCurve, which were designed to be fast, constant-time, and free of padding pitfalls. But the talk doesn't mention alternatives, except for Serpent, which he suggests would have been a better cipher than Rijndael in the AES competition.

A good read. I would love to see broader adoption of ChaCha20 in OpenSSL and its standardization in TLS 1.3. Google already supports it, but it needs more attention from cryptographers before we start using it everywhere. It also seems that SHA-3 has failed to convince, and everyone is standardizing on SHA-2 instead, despite the computational cost. Maybe here too we need an alternative...


Could I give a mention to Threefish as well? It's in the same vein as ChaCha20, but it's a block cipher: based on modern ideas, more flexible than AES (larger blocks and keys, plus a native tweak), and naive implementations resist power/timing attacks by default. Yet I rarely hear much talk about it. Half the time I see it discussed, it's along the lines of "Hey, AES is looking really old, should I use Threefish in my new application/library?" to which people respond "AES is proven and still secure, so just keep using that." Personally, I find Threefish to be a beautiful algorithm; simple and effective, just like ChaCha20.

Seems to me we've hit a local minimum with various crypto algorithms. SHA-2, AES, et al. are still considered secure, and almost no one has the guts to "risk" something new while the old stuff still works. Whereas before, we knew SHA-1, DES, etc. were failing, so people were happy to jump to new algorithms as soon as they hit the market.

(Side note: I chose Threefish-512 for my hardware password manager project. It gets used in the disk and database encryption, so its built-in tweaking is a godsend.)


Can you actually pose an argument in favor of Threefish? I have suspicions as to why Threefish, Twofish, Blowfish, &c come up in discussions like these, but rarely in discussions between actual cryptographic engineers.


I can tell you why I picked it, but I'm not a professional cryptographic engineer; just a hobbyist.

Compared to AES: implementation and use are dead simple and short (less room for bugs). The algorithm is easy to grasp. Naive implementations are resistant to timing and power analysis. Built-in support for tweaking means you don't need to mess around with weird modes of operation. Large key size, and no mismatch between key and block size. Efficient (~6 cycles per byte). And it was designed by a large group of well-known and respected cryptographers.

The simplicity of Threefish is the most important point, in my opinion. Applications of AES in the wild have been broken numerous times because of improper use of AES and its modes of operation. Threefish is less likely to suffer those issues, since tweaking is built in. So, as of today, I think that benefit outweighs the possible risk of Threefish being found weak (and keep in mind that Threefish is specifically designed to resist analysis by using a large number of rounds; breaks of reduced-round Threefish are expected, and are not likely to become breaks of full Threefish).

Additionally, if people (e.g. Google) are willing to use ChaCha20, I don't see why they wouldn't be willing to use Threefish. They're built on the same principles, as far as I can tell, and both have just as much backing (in terms of peer review and who designed them). Again, the only arguments I've heard in favor of AES involve fear of the unknown.


Sorry, I was unclear. I was comparing Threefish to ChaCha20. Yes, I can see reasons why someone would adopt Threefish over AES (AES predates mainstream tweakable cipher modes and, more importantly, was designed at a time when our understanding of microarchitecture side channels was crappy).

What I can't see is why someone would militate for Threefish over Salsa20, or one of the eSTREAM portfolio ciphers, or at least say "wait and see" on CAESAR. What process happened that landed on Threefish for you?

Incidentally: native support for tweakability is a good thing, but it doesn't mean you don't need support for "weird modes of operation". A tweakable cipher core is a primitive that you use with other tweakable block cipher modes.

I'm a little confused by your claim that Threefish got as much review as Salsa20; Salsa20 is an eSTREAM winner, implying that a pretty big team of some of the best cryptographers in the world tried hard to break it as a cipher, and Threefish is just an artifact of a SHA-3 contestant that wasn't selected.

Finally, 6 cpb isn't great, is it?


> What process happened that landed on Threefish for you?

I can't see why someone would advocate Threefish over ChaCha20 either. They serve different functions (one is a block cipher; the other a stream cipher). In my case, I needed a block cipher, hence my choice of Threefish.

I brought up Threefish not in conflict with the top parent's suggestion of using ChaCha20, but rather to suggest its place alongside ChaCha20, Poly1305, Curve25519, etc. in a modern crypto toolbox. In other words, I was saying "If someone is building an application that requires a block cipher, they should take a strong look at Threefish over AES; contrary to popular wisdom."

> A tweakable cipher core is a primitive that you use with other tweakable block cipher modes.

Indeed, but, for example, disk encryption requires less and simpler code using Threefish versus AES in XTS mode. Not to mention the caveats to XTS mode.
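To make the comparison concrete, here's roughly what sector encryption looks like with a native tweak. This is a sketch only: threefish512_encrypt is a hypothetical one-shot API (stubbed out below), not any real library's call.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Stand-in for a real Threefish-512 implementation (hypothetical
     * one-shot API, for illustration only). */
    static void threefish512_encrypt(const uint8_t key[64],
                                     const uint8_t tweak[16],
                                     const uint8_t in[64], uint8_t out[64])
    {
        (void)key; (void)tweak;
        memcpy(out, in, 64);   /* placeholder -- NOT actual encryption */
    }

    /* Sector encryption sketch: the sector number plus block index *is*
     * the tweak, so identical plaintext blocks still encrypt differently,
     * with no chaining mode layered on top. */
    void encrypt_sector(const uint8_t key[64], uint64_t sector,
                        const uint8_t *pt, uint8_t *ct, size_t len)
    {
        for (uint64_t off = 0, i = 0; off + 64 <= len; off += 64, i++) {
            uint8_t tweak[16] = {0};
            memcpy(tweak, &sector, 8);   /* sector number ...    */
            memcpy(tweak + 8, &i, 8);    /* ... plus block index */
            threefish512_encrypt(key, tweak, pt + off, ct + off);
        }
    }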

> I'm a little confused by your claim that Threefish got as much review as Salsa20

Off-hand, you're probably right that Salsa has had more review than Threefish. But Threefish has more designers. And Skein was one of the top SHA-3 choices. Not that that confers strength directly on Threefish, but it's a good sign. I mostly defer to Dmitry and Jon's comments here (http://crypto.stackexchange.com/questions/11725/has-threefis...).

> Finally, 6 cpb isn't great, is it?

Quick searching shows Salsa20 at ~3 cpb, AES at 20-30 cpb, and AES-NI at ~3.5 cpb. So 6 is great in my book, with Salsa20's 3 being amazing.


You almost never really want a block cipher. The majority of use cases for block ciphers --- in fact, the overwhelming majority of them --- essentially adapt the block cipher to perform stream encryption, often by going through the CTR step of literally transforming them into a stream cipher.
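To see how literal that transformation is, here's a minimal CTR sketch, with a hypothetical block_encrypt (stubbed out) standing in for AES or any other 128-bit block cipher:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Stand-in for a real 128-bit block cipher (AES, etc.); hypothetical. */
    static void block_encrypt(const uint8_t key[16], const uint8_t in[16],
                              uint8_t out[16])
    {
        (void)key;
        memcpy(out, in, 16);   /* placeholder -- NOT actual encryption */
    }

    /* CTR mode: encrypt nonce||counter to get keystream, XOR with data.
     * The block cipher only ever runs "forward", so this is literally
     * using it as a stream cipher; the same call encrypts and decrypts. */
    void ctr_crypt(const uint8_t key[16], const uint8_t nonce[8],
                   uint8_t *buf, size_t len)
    {
        uint8_t block[16], ks[16];
        for (uint64_t ctr = 0; len > 0; ctr++) {
            memcpy(block, nonce, 8);
            memcpy(block + 8, &ctr, 8);   /* counter in the low half */
            block_encrypt(key, block, ks);
            size_t n = len < 16 ? len : 16;
            for (size_t i = 0; i < n; i++) buf[i] ^= ks[i];
            buf += n; len -= n;
        }
    }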


Totally OT here, but do you know why there seemed to be a big movement away from stream ciphers sometime in the 90s? I remember at some point around 2000 reading an article saying that RC4 was suspect because it was a stream cipher, as block ciphers were considered more secure. Similarly, TLS contained only a single stream cipher, and my understanding is that eSTREAM happened because NIST seemed uninterested in stream ciphers.


What were the viable software native stream ciphers in the 1990s? If you look at what Schneier thought to write up in Applied Cryptography, they were either hardware bit encryptors or amateur-hour "RNG + XOR" stuff.

RC4 was the first cipher I ever successfully worked with (in my defense, I was a teenager). I had a devilishly hard time debugging block crypto code, but once you had a working RC4 library, you could round-trip data through it trivially. I remember seeing RC4 get embedded in a lot of code for that same reason: if you had RC4, you were "done", but if you had a 3DES core, you'd still need to be crypto-literate enough to rig up some half-assed block cipher mode; even ECB requires some adaptation to encrypt arbitrary streams, which is what everyone wants to do.
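For the curious, here is the whole thing. RC4 is thoroughly broken and shown for historical illustration only, but it makes the point: no modes, no padding, no IVs, and one call round-trips arbitrary-length data.

    #include <stddef.h>

    struct rc4 { unsigned char S[256]; unsigned char i, j; };

    void rc4_init(struct rc4 *st, const unsigned char *key, size_t keylen)
    {
        for (int k = 0; k < 256; k++) st->S[k] = (unsigned char)k;
        unsigned char j = 0;
        for (int k = 0; k < 256; k++) {
            j += st->S[k] + key[k % keylen];
            unsigned char t = st->S[k]; st->S[k] = st->S[j]; st->S[j] = t;
        }
        st->i = st->j = 0;
    }

    /* The same call encrypts and decrypts: just XOR with the keystream. */
    void rc4_crypt(struct rc4 *st, unsigned char *buf, size_t len)
    {
        for (size_t n = 0; n < len; n++) {
            st->i++;
            st->j += st->S[st->i];
            unsigned char t = st->S[st->i];
            st->S[st->i] = st->S[st->j];
            st->S[st->j] = t;
            buf[n] ^= st->S[(unsigned char)(st->S[st->i] + st->S[st->j])];
        }
    }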

There was a pretty significant amount of interest in reversing the RC4 algorithm (hence "arcfour"), and I think this is part of the reason. People wanted something that worked like a stream cipher, and didn't have better alternatives (unless they were themselves cryptographers).

Someone more acquainted with the literature might correct me, but my sense is: we didn't "move away" from stream ciphers, so much as we didn't have them at all, and gradually developed some. You see the same thing now with CAESAR and native AEAD ciphers.


ChaCha also got a lot of review in the form of BLAKE, whose security margin was roughly consistent with the existing cryptanalysis of ChaCha.

AES-128-CTR with AES-NI is ~0.8-1.2 cpb on Haswell/Ivy Bridge/Bulldozer.

ChaCha20 is ~1.1 cpb on AVX2, ~2.2 cpb on AVX/XOP, ~2.8 cpb on SSSE3, and ~3.3 cpb on SSE2.

ChaCha20 also performs much better on 32-bit systems, especially those without SIMD.


Turning your question on yourself, why do you think Threefish is not as good as Salsa20 or such? What about it makes you uneasy, and why?


Salsa20 is better reviewed, it's faster, it's significantly simpler (Salsa20 is basically just a keyed hash core running in counter mode), it's a native stream cipher --- which is virtually always what you actually want --- and it's not idiosyncratic.


A Vigenère cipher with the key "BRUCESCHNEIER" is in fact unbreakable.


... by the kinds of people who pick cryptographic primitives because they were authored by Bruce Schneier.

(ducking)


I would also like to see Curve25519 (as well as others) supported in all browsers. It really bothers me that while at least websites are making an effort to default to ECDHE key exchange with PFS, they have to resort to using the low-performance, insecure and probably NSA-backdoored P-256 curve, because most browsers don't support anything else right now.


You could cite sources on "probably NSA-backdoored" P-256, but they'd all boil down to "you can with some effort devise a scenario in which it would have been possible for NSA to influence the curve selection". You'd be in a very similar rhetorical position to the "controlled demolition" conspiracy theorists.

We should, of course, demand rigid curves, and very few of the mainstream curves are rigid. But there's a difference between that and the assertion that P-256 is "probably backdoored"; for one thing, the latter assertion encourages people to adopt RSA, which is more fraught than ECC.

That RSA vs. ECC issue is even worse in browsers, which I think makes your sentiment even less useful. In ground-up implementations, you can get some comfort from OAEP/PSS for RSA. No such help in TLS, where RSA still uses (terribly broken) PKCS1v15.


>"you can with some effort devise a scenario in which it would have been possible for NSA to influence the curve selection"

Let's try to be clear: The NSA (naming names, among them Jerry Solinas) worked directly with the Certicom engineers to choose the properties of the SECG curves (some of which became the NIST/X9.82/Suite B curves, there's a lot of parallel working/overlap). They helped to perform the computational search to find the seeds, which were manipulated so that the results had certain properties (some of which were concealed and then were misrepresented as "provably random"). That is not a scenario: that is actually what happened.

Given that, and how incredibly opaque the process was, why am I not screaming in horror running away from secp256r1 et al? Because the information I've found (except for the SIGINT Enabling Project's budget) all points to that manipulation and deception taking place to obscure properties of the curves which allowed patents for efficient implementation of the curves - patents the NSA had in return agreed to licence.

Also, they use it themselves, in their most critical systems (they are particularly careful about randomness and side-channels). Is that just a cover? (Are they really willing to sacrifice the IA mission for the SIGINT?) I have no evidence that it is. It probably isn't. It could be. (They also use Dual-EC_DRBG themselves, likely on the basis that since only they have the private key, "nobody but us" can break it - shame it's got a bias anyway!)

On balance, I think the NIST curves are probably OK to use for now. If secp256r1 was somehow "backdoored", I have absolutely no idea how, and on balance, I do not think that it is. I think it's more likely that any negative manipulation was to deliberately standardise something fragile: not weak in itself, but easy for a naïve implementation to make weak (hard to do in constant time, easy to talk people into insecure blinding techniques, etc.). (And yes, I agree RSA w/PKCS#1 v1.5 is definitely not better - one may wonder if it's another example.)

So I'm not running away from P256. But I'd like to walk away from it and switch to something more rigid, safer and a bit faster soon - and hopefully we get that out of the CFRG process (to the grandparent comment: Curve25519 is one of the top contenders).

The slides are a bit out of context without the talk, by the way!


This is a much better comment than I could have written.


I'm going to display lots of ignorance in the hope you'll correct me. I thought these "possibly backdoored" curves needed a seed, and the creators provided a "provably random" one by hashing... a random number they provided. That alone is so silly that I'm not sure how anyone could take it seriously. But then, after that, it was discovered that all those "provably random" seeds had a one-in-a-million weakness. While this weakness alone doesn't compromise everything, it's an indicator that someone intentionally searched for those seeds, so it might indicate something more.

Is my poor understanding terribly wrong?


Yes. The P- curves start from a single random number, and then go through a number of reasonable steps to expand that number into a set of curve parameters. Given the steps involved, it's not easy to see how that random seed could be used to select a weak curve, but it's possible. At this point, nobody is OK with "possible".

The NIST curves suck. But if your choice is between the NIST P- curves and PKCS1v15 RSA, it's not hard to see what the better of the two options is.


So the fact that NIST used a "verifiably random" seed that was definitely not random, via a process later shown not to be rigid, isn't a practical cause for concern? Wouldn't it be prudent to assume that these unexplained seeds might enable some more advanced attack, assuming the NSA is a decade or two beyond public research?


The seed itself was never claimed to be verifiably random. Instead, because a (cryptographically secure) pseudorandom generator produces the curve parameters out of a public seed, the curve parameters are said to be "verifiably random". Today, we have stricter requirements on curve rigidity in mind than at the time the curves were released. It's not really fair to claim that NIST acted with malice by not following best practices which only developed after their publication.


How does hashing prevent anything? It just makes it a bit harder, as you've got to try a bunch of inputs and look for a desired output. Big deal, especially if they have plenty of time before publishing. It's not like it is hard at all to come up with a really truly random seed to use, one that would not allow such speculation.


If you are looking for one specific output, the number of inputs you must try is around 2^159 on average before you find a preimage (far too large to brute-force with our meager earthly energy resources). The current fear regarding these curves is that the NSA might know how to break a small fraction of all curves, say every millionth curve. In that case, trying out seeds until you hit one of those rare curves is feasible, but a truly random curve would be safe with 99.9999% probability. As we don't trust anybody to generate these truly random curves, we are stuck with rigidity requirements as our best shot.
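To make the rigidity worry concrete, the NIST/X9.62-style derivation is roughly the following (a simplified sketch using OpenSSL's SHA1; the real procedure expands the hash output to the field size and runs full parameter checks):

    #include <openssl/sha.h>

    /* Stand-in for the real acceptance test: expand r to a field element,
     * require r*b^2 = -27 (mod p) for the curve y^2 = x^3 - 3x + b, run
     * the public security checks -- and, in this thought experiment, a
     * secret "is this one of the rare weak curves?" test as well. */
    static int curve_acceptable(const unsigned char r[SHA_DIGEST_LENGTH])
    {
        (void)r;
        return 1;   /* placeholder: always accepts in this sketch */
    }

    void derive_curve(unsigned char seed[20])
    {
        unsigned char r[SHA_DIGEST_LENGTH];
        for (;;) {
            SHA1(seed, 20, r);                /* seed -> candidate value */
            if (curve_acceptable(r)) return;  /* publish (seed, curve)   */
            /* Anyone can verify seed -> curve afterwards, but nothing
             * stops whoever picks the seed from quietly iterating here. */
            for (int k = 19; k >= 0 && ++seed[k] == 0; k--) {}
        }
    }

The verification is one-way: the published seed proves the curve came out of SHA-1, but says nothing about how many seeds were tried first.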


Which hash function was used?


Apparently SHA-1: http://csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReC... - Appendix 4-7

Which is fine, given that even today SHA-1's preimage resistance is not practically broken.


Would you support an effort to get Ed25519/Curve25519 incorporated into NSS and OpenSSL? :)


What does that even mean? Are Ed25519 and Curve25519 superior to ECDSA and NIST P-256? Of course. What would it mean to "support" or "not support" it?


Support an effort -- Volunteer time/resources to make it happen.


It's not about the code. You need to get Mozilla and Google to agree to deploy. They aren't dragging their feet; they put an enormous amount of effort into these problems, and employ some of the best in the industry.


Okay. I wish there was more visibility on these matters. I'm still, by and large, an outsider to all the goings-on of the industry. :)


That's not fair either. Most of this stuff is happening in the open; you just have to put the effort in to joining the mailing lists, reading the bug trackers, &c.


Thomas, you basically just told me, "That's not fair either. Most of this stuff is happening in the open; you just have to know the obscure places to look."

Visibility isn't a matter of "can the public see it?" I also mean "is it in a place the public is likely to look?"


Nobody is hiding it. People who are competent in the subject matter can find it easily if they want to. I think, respectfully, that you might just not want to be wrong about this. :)


I was moreso trying to express a desire for better PR than criticizing the transparency of their efforts. I'll hunt down these elusive mailing lists and bug trackers and see if I can cobble together a TL;DR portal :)

Sorry if there was a miscommunication ;)


You have to say "I think OpenSSL should support Ed25519" and then you support it. Until you say the magic words, you don't support it.


That's not at all what I meant by "support".


My understanding of the issues with P-256 is that it's very easy for mistakes in the ECC implementation around certain points to leak information, whereas with other curves it is not. Not quite "backdoored", but certainly not optimal in terms of security.

Edit: based on http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4...


It's true that P-256 is more susceptible to implementation flaws than other curves, because P-256 wasn't designed to be misuse-resistant, unlike recent curves where that was a design goal.

It's unlikely that any credible new protocol is going to adopt a NIST curve. They exist only for legacy support now. BULLRUN killed them.

Fortunately, in this case I think hysteria actually served a good purpose. I do not think highly of crypto standards, and these ones in particular were dragging the industry down.


If anyone has the time and skills to implement these curves in NSS, I'm sure that the Firefox and Chrome security teams would be more than happy to enable them!


https://bugzilla.mozilla.org/show_bug.cgi?id=957105

Looks like there's already a bug in Mozilla's tracker.


Very smart crypto implementors are already working on this codebase, aren't they?


Not being one of them myself, I can only notice that their time is limited, and many much wanted improvements sit on the backburner waiting for a smart crypto implementor to work on them.

NSS and OpenSSL are difficult codebases to work with, which may also explain why it's so hard to find people to work on them.


Why do you expect Firefox and Chrome to take patches as opposed to saying that patches should wait for specs from the TLS WG which waits for the CFRG to bless something?

The political problem is that unless there's an authority whose blessing can be appealed to to justify inclusion, how do you decline curves from Microsoft Research and so-called NUMS curves? (Of course, it's not like the CFRG is a politically spotless authority...)


I'm not as worried about Chrome/Mozilla as I am about IE and Safari (including the mobile one).


I agree. Through PECL, PHP is soon going to have Curve25519 support: http://pecl.php.net/package/libsodium

There's already a Node-Sodium package... https://github.com/paixaop/node-sodium

Why can't browsers support it natively? (e.g. through NSS)


Are there actually many alternatives at this time? SHA-3, and more recently CAESAR, still received submissions based on AES. Grøstl (one of the SHA-3 submissions borrowing from AES) was a top 5 candidate. Constant time, side-channel free algorithms don't seem to be a big issue for many people yet.


I don't think this is a fair assessment. There was a lot of concern over implementation issues during SHA-3 and every finalist did have constant-time implementations, usually coming from the design team itself.

The thing with AES and its derivatives is that they make it very easy to prove resistance against cryptanalysis (differential and linear); ARX designs are very hard to analyze exhaustively, and bit-oriented designs tend to be slow. Using AES-like constructions provides a middle ground of reasonable performance and provable security (for some value of security).


Grøstl's round 3 specification document mentions 3 'strategies' for constant-time implementations: AES-NI, vperm (AVX/XOP/NEON), or bitsliced (for which they estimate "only a 50% overhead" vs. tables). Yet almost all of the implementations they provide that are not AES-NI are either table-based or horrifically slow. ARM/NEON is the only non-AES-NI platform with a constant-time implementation that is sometimes on par with the table-based alternative.

Their constant time approach is "assume use of hardware AES instructions, otherwise enjoy a speed hit if you want to be safe".


I'm not sure what your point is. Your claim was that constant-time was not a big issue for many people. Maybe it's not. But the Groestl people wrote an entire report on implementation strategies for it [1], specifically mentioning cache-timing issues in the table-based approaches. They clearly took it seriously.

There are many competing requirements when designing a primitive. Groestl's choices were not necessarily wrong, even when treating side-channel attacks as a "big issue". I don't like Groestl, but I get what they were trying to achieve; Keccak did it much better, though.

[1] http://www.groestl.info/groestl-implementation-guide.pdf


Advocating table based implementations that are not secure is not taking it seriously. Providing implementations that are not secure is not taking it seriously. They may have taken it more seriously than had they designed it in 2001, but it clearly wasn't a major issue for them.

I will admit they are less at fault than the CAESAR candidates who insist on re-using AES.


He's right about the low entropy of many crypto random number generators. There are so many cases of this that it can't be by accident.

2013: Android random number flaw exploited for Bitcoin theft.

http://arstechnica.com/security/2013/08/google-confirms-crit...

Backdoor in RSA?

http://arstechnica.com/security/2014/01/how-the-nsa-may-have...

Linus Torvalds insisted on accepting an Intel patch to use their hardware random number generator directly, instead of just using it as one of multiple sources being XORed together.

http://cryptome.org/2013/07/intel-bed-nsa.htm

Then there are "features" in SSL/TLS which make little sense, such as the ability to change the cryptosystem during the connection. This includes changing it to "None". That's been exploited.

http://www.praetorian.com/blog/man-in-the-middle-tls-ssl-pro...


Regarding Android: not only could it be an accident, it almost certainly is an accident. For instance: the Android RNG flaw didn't stem from a single coherent design flaw, but rather from the meta-flaw of relying on application-layer (or in this case runtime-layer) randomness, and not nailing down the order of initialization of all the components. It's easy to see how this would happen by accident: (a) the belief that runtime-layer randomness is better, more performant, and more reliable on mobile platforms, and (b) the lack of understanding that a single central kernel-level RNG is superior to app-layer RNGs (which is a relatively modern idea).

Regarding RSA: that's not a backdoor in RSA. That's a supposed backdoor in BSAFE, a library sold by RSA. The story pertains to BSAFE's support for the Dual-EC PKRNG. (Of the supposed backdoors propagated by the NSA, by the way, this is probably the most credible).

I'm not sure what your allusion to Torvalds was meant to indicate.

The SSL/TLS downgrade attack you linked to didn't have anything to do with the "None" ciphersuite, or re-negotiation.


> I'm not sure what your allusion to Torvalds was meant to indicate.

This. Right here is where Linus Torvalds sold out:

https://lkml.org/lkml/2012/7/5/425

FreeBSD didn't:

http://arstechnica.com/security/2013/12/we-cannot-trust-inte...


Read carefully what Linus actually wrote, not what you believe he wrote. Here he suggests an even better implementation than the one actually implemented afterwards: feeding the hwrng into the pool just like all the other sources. The last time I checked (a year or two ago), the hwrng was there but got special treatment, not passing through the same path as the other sources.

So it's more "why wasn't the hwrng treated like the other sources later" than why he wrote what he wrote here.


I think Bernstein agrees with this, for what it's worth. I could be remembering wrong.


IIRC, Bernstein actually disagrees with (almost) any use of rdrand. Consider, as a toy example, an rdrand that poisons a register, so that xor'ing it with anything results in a known value; this might be worse than useless.

(http://blog.cr.yp.to/20140205-entropy.html: "imagine RDRAND microcode that's looking at the randomness pool that it's about to be hashed into")
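A toy illustration of the worry (hypothetical stand-ins, not real kernel code):

    #include <stdint.h>

    /* Stand-ins for illustration; a real kernel would execute the RDRAND
     * instruction and read its internal entropy pool here. */
    static uint64_t rdrand(void)     { return 0; }
    static uint64_t pool_state(void) { return 0; }

    /* The worried-about pattern: XORing the hardware RNG directly into
     * the output.  Microcode that could observe the pool at this point
     * could have rdrand() return pool_state() ^ evil, forcing the
     * "random" result to equal evil, a value the attacker chose.
     * Hashing the RDRAND output into the pool state instead defeats
     * this particular trick (though not a fully malicious CPU). */
    uint64_t bad_extract(void)
    {
        return pool_state() ^ rdrand();
    }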

I'm not sure I agree with Bernstein here.


I also don't see why we should assume that Intel would implement RDRAND as looking at the randomness pool, while at the same time believing they can't do even more funny things when they control the whole CPU. If active maliciousness is assumed, then the CPU can't be used at all.


I feel like I remember him saying something about how as long as it's just another thing hashed into the randomness pool, it's not that big of a deal. But I haven't seen that in writing anywhere. I could just be wrong.


I cannot find anything like that for "rdrand site:cr.yp.to"; it is a reasonable position, but I don't think it's Bernstein's.


No, I mean, this is something I thought I heard him say in person.


I.e. feeding the hwrng into the pool, the one whose comment at the top says "this is not secure against malicious intent"? That one? Actual quote:

   Randomness from these sources are
   added to an "entropy pool", which is mixed using a CRC-like function.
   **This is not cryptographically strong**, but it is adequate assuming
   the randomness is not chosen maliciously
drivers/char/random.c, 2.4.22, lines 69-72, emphasis added.


Thanks. That can explain why they still don't mix the hwrng directly into the pool. They don't want to increase the "estimate" of the "random bits available" in that pool.


Alright, now I'm confused. Your previous post, the one I replied to, has this:

> the one actually implemented afterwards: feeding hwrng to the pool just like the all other sources.

And now you're saying it's not fed "just like all the other sources".

This is contradictory.


A few posts above, I wrote "read carefully what Linus actually wrote." This time I can only say "read what I actually wrote"; don't take a few words out of the sentence. My assumption in this discussion was that others know more about the subject from other sources. I have the impression that that background is missing in your reply. Still, even if we stick to what I've written:

In my older post I claim that I read the random.c source circa 1-2 years ago. Note: the source, the C code, not the comments. My older sentence was: "Here he suggests an even better implementation than the one actually implemented afterwards: feeding the hwrng into the pool just like all the other sources. The last time I checked (a year or two ago), the hwrng was there but got special treatment, not passing through the same path as the other sources."

I think it's clear: "The last time I checked (a year or two ago) the hwrng (..) got special treatment, not passing through the same path as the other sources."

Then in the later post my words were: "they still don't mix the hwrng directly into the pool." I wrote that having actually checked the current git master of random.c.

I don't see anything contradictory.

I also have more understanding now for the idea behind the current implementation. It looks to me like they try to handle the hwrng as something that can be actively malicious, not just something with a known seed (while merely having a known seed, to my understanding, shouldn't be a problem to mix together with the other inputs). I personally think that the assumption of active maliciousness is too much, because the same company produces the whole CPU; if you assume that, then you shouldn't use the CPU at all: it can be actively malicious all the time, not just in one instruction.

And, by the way, mixing through the same path but without increasing the "entropy counter" would maybe have been possible, but maybe it was cleaner this way.


Note that Torvalds' father actually testified before the EU parliament that his son told him he had been approached by governments to compromise Linux (something Linus also alluded to at a Linux conference Q&A). People brushed it off as a joke, but that seems really disingenuous. His dad believed him enough to make official testimony before the EU parliament.


By this logic, the NSA is responsible for all buffer overflow attacks as well.

At the end of the day, there are just things that are easy to get wrong; it's not all a big conspiracy.


This is a fun slide deck, but if you'll forgive me for sucking some of the mystique out of it: it's just a reframing of DJB's hobby horses:

* The group that standardized AES rejected cache timing as a viable attack vector: http://cr.yp.to/antiforgery/cachetiming-20050414.pdf --- more generally, that constant-time algorithms and constructions (a feature of virtually all of Bernstein's work for the last 15+ years) aren't taken seriously in industry. Also helpful to know: there's a defensible argument that Bernstein more or less started AES cache timing research.

* Side-channel attacks weren't taken seriously by TLS, and Bernstein is affiliated with one of the research groups that found a TLS side-channel attack: http://www.isg.rhul.ac.uk/tls/Lucky13.html

* Application-layer randomness is a bad idea; like Nacl does, everyone should just use a single, carefully audited kernel RNG: http://blog.cr.yp.to/20140205-entropy.html

* Protocols and constructions should be designed to minimize dependence on randomness, the way DJB's EdDSA does: http://ed25519.cr.yp.to/ed25519-20110926.pdf

* Crypto performance is both not taken seriously as a research goal and an excuse for the deployment of terrible cryptography. This isn't so much a hobby horse of DJB's as it is his entire research career: http://cr.yp.to/cv/research-net-20070115.pdf

* DNSSEC, with its core design goals of "sign-only", "sign offline", and "sign from the root down" is a terrible idea. A sane design would look more like DNSCurve: http://dnscurve.org/ (helps also to know that DJB has a longstanding feud with both the design team for BIND, the flagship DNSSEC implementation, and with Namedroppers, the IETF DNS standardization list).

Unsurprisingly, considering the source, these are all really great important ideas. Bernstein is one of my heroes, and I'm certainly not trying to take him down a peg here. I just thought it might be interesting for people to know that this deck is less a revelation about cryptography than it is a survey of DJB's research over the last 15 years.

I'm surprised he didn't take more time on RC4, which he was closely involved with breaking. The story about how something as dazzlingly broken as RC4 could have gotten so entrenched in the industry is much more interesting than the story about how AES was standardized despite its performance relying so much on table lookups.


"Protocols and constructions should be designed to minimize dependence on randomness, the way DJB's EdDSA does:"

I think a lot of the reason that ECDSA/DSA are so inelegant was that they were designed to get around the patent on the (quite elegant) Schnorr signature algorithm, which EdDSA is based on. Thankfully, the patent expired in 2008, so moving to better signature algorithms that don't rely on per-message randomness is a matter of institutional inertia at this point.


Hmmm… and ElGamal's signing scheme didn't qualify? I don't think that's a reason, or even an excuse: Schnorr actually did claim his patent read on DSA. (That obviously never went anywhere, and the US Government, in a show of force via the NSA, filed their own patent on DSA.)

The NSA also really doesn't seem to have had much of a problem negotiating the licensing of patents with industry participants in the past! It is, after all, in an extraordinarily strong bargaining position: an awful lot of public money to throw around (openly, via on-the-book government contracts, and otherwise); the backing of a 'superpower' government and its negotiators; a remarkably good idea of the position of those it's bargaining with (it is a signals intelligence agency…!); and it is even able to classify patents and other nonsense like that.

DSA was sceptically received at the time, and there was widespread suspicion about its complexity and critical dependency on random k. (Even just two or three predictable leading bits of k can cause disaster!) Sizes above 1024 bit were not specified for way too long, and I would consider a 1024-bit DSA key most definitely crackable by the computing resources available to the NSA (or perhaps anyone else with a budget and a chip fab on contract after-hours!) today. (Those with DSA PGP signing keys may wish to take note and replace them, I suggest, with 4096-bit RSA keys.)

I think perhaps the only reason DSA got any popularity was because of the RSA patent, which it wasn't subject to, and the US cryptographic export restrictions which were the style at the time (remember the PGP source code "type-in listing" book publication?), which DSA avoided ostensibly because you couldn't encrypt with it but mainly because the US Government wanted you to use it. (ElGamal's encryption scheme was commonly used in tandem with it in practice, for example by some PGP versions.)

ECDSA has been accepted in a more sanguine fashion overall, although I'm not sure why that has been. Just like DSA, it definitely works correctly if it's done properly - but one tiny slip and you're toast.

Why was ECDSA better-received than DSA? Time may have helped; so has what they were seen to have done in the meantime. "Suite B-eatification" (as Kevin put it!) I think might've made people a little less sceptical of the NSA's intentions overall; also SHA (which as far as we know is not backdoored and never has been), and (via NIST) their arms-length involvement in reviewing candidates for the AES process. Suite B introduced their "Information Assurance" mission (the one to protect classified US Government data - anyone else's data, or unclassified US Government data, is Not Their Problem). With Suite B, they wanted to be able to use more off-the-peg commercial providers for that purpose (rather than their bespoke, if a bit haute couture weird and distinctly old-looking, in-house stuff, which they refer to by contrast as "Suite A"). And people - perhaps foolishly - trusted that, despite the "equity" tension between their two missions, they would never risk compromising the same encryption systems they'd use for top secret US Government data. (It appears that trust may have been sadly misplaced… and, thanks to Snowden's bold willingness to go on the record about their unethical actions, they have permanently destroyed that trust now.)

We can, indeed, do better now, happily. I wonder if we will?

This talk was a fun "thinking-like-the-enemy" thought experiment, interlaced with the kinds of things djb's been generally warning about for years - I think it's up on YouTube somewhere, and like many slides makes more sense with the talk, although I sadly can't find it now (and I recall the audio quality wasn't great).

Along similar lines, someone down below has helpfully pointed out the fantastic "Operation ORCHESTRA" briefing PHK gave at FOSDEM 2014, which is equally funny and pointed. That video is on YouTube: https://youtu.be/fwcl17Q0bpk - http://phk.freebsd.dk/_downloads/FOSDEM_2014.pdf


I'm only 10 minutes into the first YouTube vid... holy shit, this is the most terrifying thing I've ever seen.

No wonder our patent system is so broken... the government is literally using it to its advantage. Screw the cloak-and-dagger shit when you can just hire some lawyers to bury people, and everyone acts like it's commonplace... wow. WOW WOW WOW.

Ty for this.


In case you weren't in on the memo, it isn't a real NSA briefing. :)

But it could be. Any smart attacker with a budget could work like that…


I found this particular talk to be different from the usual DJB. It felt more like a self-reflection on why, despite being technically correct, his cryptographic solutions always fail to gain mainstream traction.

I have a lot of respect for DJB and for the way he always challenges the status quo. This talk is an excellent summary of the current issues in the crypto industry. But he ignores the fact that we use AES, SHA and RC4 because _it's easy to use them_. And his solutions are never easy to use.

Make Nacl dead simple to use. Write clients and servers that use it. Integrate it into Firefox, Chrome, Dovecot, Nginx, Postfix, curl, etc., and people will slowly move away from TLS and its broken crypto.


Have you seen libsodium [1]? It's a "portable, cross-compilable, installable, packageable fork of NaCl, with a compatible API, and an extended API to improve usability even further." I've been using it in a toy project of mine and so far I'm very impressed!

1. https://github.com/jedisct1/libsodium


In practice, people are afraid to use libsodium because it was significantly worked on by people other than DJB, and they weren't sure whether it was vetted by DJB. I've personally seen this as an actual reason for avoiding it. And the concern doesn't seem entirely misplaced.


But why? Would you be afraid to use LibreSSL because the AES implementation wasn't done by Rijmen and Daemen (the inventors of AES)? Generally, one feature of the Bernstein designs is that they are reasonably simple for a competent programmer to implement securely. By comparison, RSA and the like are frigging hard to get right because of side-channel attacks [1].

[1] http://www.tau.ac.il/~tromer/acoustic/


No, you're missing something. Nacl isn't secure merely because it implements trustworthy algorithms that its authors themselves design. It's secure because it was designed from the ground up to be secure at an implementation level, and (critically) to be misuse-resistant. Equally importantly, the implementation was done by people with a pedigree in building secure misuse-resistant crypto.

Sodium (which I like and recommend) can't make the same claim. It gets what trustworthiness it has from the fact that it's a slavish fork of Nacl that merely aims to make it compatible with more runtimes.


> It's secure because it was designed from the ground up to be secure at an implementation level, and (critically) to be misuse-resistant.

Ok, interesting. Do you happen to have an example at hand? I don't see where there is room left for much sophistication in the implementation (regarding misuse-resistance), between the well-defined algorithms and the strict API that Sodium shares with NaCl. For example, let's look at crypto_box (public-key authenticated encryption):

On the crypto side it uses Curve25519 (alias X25519). Its paper gives a pretty clear definition of how to use the public key bytes as an x-coordinate. Because of the Montgomery ladder, an implementation does not have to calculate the y-coordinate, which eliminates one potential error source. Because of (1) its twist security and (2) the fact that any input in a small subgroup results in a zero-element output (the neutral element or the (0,0) point), an implementation does not have to check the public key for validity (another error source eliminated). All these are benefits any implementation will have.

On the interface side, crypto_box is as locked down as a nonce-based design can be. You enter: (•) any 32-byte public key value (X25519 makes sure no harm can be done with malicious values), (•) your 32-byte private key (not malleable by the attacker), (•) the nonce (during decryption any input is ok, because a wrong input will just generate an authentication error; during encryption there is the danger of nonce reuse, which is also present in NaCl), and (•) the plaintext/ciphertext (again, because of the mandatory authentication, any malicious ciphertext input will be rejected by the library).

So I don't see how a faithful implementation of the interface and algorithm specifications can get much wrong regarding misuse-resistance. As for it being secure because it was designed from the ground up to be secure at an implementation level: that is something I expect from any cryptography library. After all, is there any more basic requirement than that? (Well, you could argue being interoperable with other implementations is the most fundamental requirement, but I won't accept such low expectations when I'm asking for a competent developer.)
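For reference, that whole locked-down surface is a handful of calls in libsodium (real API names; error handling trimmed to the essentials):

    #include <sodium.h>

    int demo(void)
    {
        if (sodium_init() < 0) return -1;

        unsigned char apk[crypto_box_PUBLICKEYBYTES], ask[crypto_box_SECRETKEYBYTES];
        unsigned char bpk[crypto_box_PUBLICKEYBYTES], bsk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(apk, ask);
        crypto_box_keypair(bpk, bsk);

        unsigned char nonce[crypto_box_NONCEBYTES];
        randombytes_buf(nonce, sizeof nonce);   /* never reuse per key pair */

        const unsigned char msg[] = "hello";
        unsigned char ct[crypto_box_MACBYTES + sizeof msg];
        crypto_box_easy(ct, msg, sizeof msg, nonce, bpk, ask);   /* A -> B */

        unsigned char out[sizeof msg];
        /* Returns -1 on any tampering: authentication is not optional. */
        return crypto_box_open_easy(out, ct, sizeof ct, nonce, apk, bsk);
    }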

> It gets what trustworthiness it has from the fact that it's a slavish fork of Nacl that merely aims to make it compatible with more runtimes.

I fully agree that its inheritance from NaCl is Sodium's strongest selling point, and it's the argument I should have been using instead. I only had it dimly in the back of my head at the time, though, and wasn't sure enough about it to claim it as fact. Being less lazy about looking it up would have saved me this follow-up post ;) (Though I am genuinely interested in my question above.)


I feel bad that I don't have a good response to this comment right when the thread got usefully technical (but maybe 'pbsd will jump in and correct me on something). The fact is that I like libsodium and do not actually believe that the port itself introduced problems --- although I'd be a little concerned about any additional crypto functionality they ever introduced.

I'm just saying: I wouldn't trust libressl if Vincent Rijmen wrote it. :)


I only arrived late here; the thread is already too large to do any proper pedantry.

I like libsodium. It demonstrates that usable packaging is important, something that DJB seems to either ignore or not understand. But I only endorse libsodium as long as you stick to the NaCl API and not the stuff it's been slowly piling on, which may or may not be OK. What it certainly does is make choosing which function to use a lot harder: do I use crypto_secretbox or crypto_aead? crypto_hash, crypto_generichash, or crypto_shorthash? crypto_auth or crypto_generichash?


One response: timing attacks. DJB is one of the most knowledgeable researchers on side-channel attacks, and he leverages this knowledge when both designing _and_ implementing his algorithms.


True, but that is a common (though difficult) requirement of well written crypto libraries and not directly related to misuse-resistance.


Too bad. But thanks for your reply anyway :)


As well as being the author of the algorithms in NaCl, djb is also a specialist in secure software development, as tptacek mentioned.

You can be reasonably sure that his code is bug free, but they went the extra step of proving the absence of certain types of bugs using static analysis tools. One of the project's deliverables mentioned they used Frama-C to prove the absence of integer overflows, for example. (I will update this comment when I find the specific report.)

Other projects take the traditional approach of trusting that the code looks right until someone proves otherwise.

Edit: The report was probably one of these, which I can't access anymore:

http://www.cace-project.eu/downloads/deliverables-y1/CACE_D5... http://www.cace-project.eu/downloads/deliverables-y1/CACE_M5... http://www.cace-project.eu/downloads/deliverables-y3/31_CACE...


All the deliverables are accessible here [1], and from the most relevant one (D5.4) the short story seems to be:

- The core library's memory safety was verified using VCC;
- A Frama-C plugin was used to verify (a limited form of) side-channel resistance.

Correctness verification seems to have been done by reimplementing NaCl in CAO, a language made during CACE for this purpose. It is not a 1:1 match to C, from what I can tell, so this does not imply correctness of the C code.

However, none of this is verifiable at all. The NaCl code has no VCC or Frama-C annotations, or associated CAO; there is absolutely no mention of this in the NaCl page. So we are forced to take the authors of that report at their word, if we want to believe them at all.

[1] https://www.cspforum.eu/documents-reports/description/cace


I must say, as a non-participant, I am slightly disappointed in how the deliverables of the CACE project were handled.

Nevertheless, one thing that is “left” from the verification effort is hidden options -experimental-mem-deps and -experimental-path-deps for Frama-C. These are on the automatic side of Frama-C analyses: they require a value analysis to pass on the target code, but they do not require it to be precise. And it is fun to see input length and error conditions pop up as things the execution time depend on (and in the case of a typical AES implementation, the input buffer itself):

http://blog.frama-c.com/index.php?post/2011/12/31/Do-not-use...


Sounds like DJB's job could get much easier if he adopts Rust once it's stable.


This is like a comment from a Markov generator.


How about an inversion of the Turing Test, where human contestants try to convince judges they are a Markov generator? Since it's difficult for people to generate randomness, this should be challenging enough to be interesting. (Not claiming it's intellectually interesting to participate in, however.)


I can name a couple credible contestants from this site (obviously, if there was an Asshole Test, there'd be no question who'd win it from HN).


Would that be positively or negatively correlated with karma? ;)


Isn't misuse-resistance a property of the interface alone and doesn't it consequently transfer to libsodium? No argument on the other counts.


A community should raise money to have libsodium audited by professional crypto auditors. I'm not sure appealing to the ease of implementing DJB's theories is a good idea given how screwed up a lot of the crypto in the field actually is.

All I'm saying is, in general, crypto is hard, and an easy theory doesn't change how hard it is to get an implementation exactly right.


I don't think this is a great use of crypto auditor time, for what it's worth. Microsoft SecureChannel, Truecrypt, libOTR, GnuPG, PHP mcrypt --- all much more important audit targets.


Ah, good point.

GnuPG hasn't been audited? Huh.


Is that because of those projects being more widespread or less innately trustworthy or both?


A little of both, depending on the project.


No, I'm afraid to use LibreSSL because the AES implementation wasn't done by Dan Bernstein. The issue is not so much with using algorithm implementations implemented by the algorithm designer as with using algorithm implementations implemented by somebody who knows how to program.

In this particular case, there is the issue that the AES implementation in OpenSSL has been shown in the past to be vulnerable to timing side-channel attacks over the network (again, by Dan Bernstein) and as far as I know has not been through the complete overhaul necessary to fix that problem.

I wouldn't be at all surprised if someone found exploitable side-channel attacks in third-party implementations of Salsa20 or ChaCha in the next few years.


TweetNaCl was written by DJB, and implements the same primitives. It's nearly a drop-in replacement.


TweetNaCl (http://tweetnacl.cr.yp.to/software.html) is dead easy to use - just drop the .c & .h files into your code.


Well, it might be dead easy to drop into your code, but that doesn't make it dead easy to use. In fact, it has no documentation (at least not on the page you linked to), so the only way to actually use it is to reverse-engineer it.

I'm a big fan of DJB, but the sad fact of the matter is that his stuff is, at the moment, only usable by people who already know what they are doing.


The documentation is on the NaCl site which is linked: they are compatible.


True, but to get to the docs you have to go through three levels of indirection (Introduction -> NaCl -> one of the nine documentation links on the left), and the docs are not exactly beginner-friendly. TweetNaCl is a huge advance in usability over full NaCl (which is a freakin' nightmare), but I still say it needs better documentation if it's going to be usable by anyone who isn't already an expert.


In most cases you only need a handful of functions - crypto_box_keypair/crypto_box/crypto_box_open for public-key crypto and crypto_secretbox/crypto_secretbox_open for secret-key crypto. It is way simpler than any other crypto library I have used.
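A minimal round trip with those functions looks something like this (classic NaCl API as TweetNaCl implements it; note the zero-padding quirk, and that TweetNaCl expects you to supply randombytes() yourself, e.g. from /dev/urandom):

    #include <string.h>
    #include "tweetnacl.h"   /* the drop-in .c & .h mentioned above */

    /* You must provide this yourself when using TweetNaCl. */
    extern void randombytes(unsigned char *, unsigned long long);

    int demo(void)
    {
        unsigned char apk[crypto_box_PUBLICKEYBYTES], ask[crypto_box_SECRETKEYBYTES];
        unsigned char bpk[crypto_box_PUBLICKEYBYTES], bsk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(apk, ask);
        crypto_box_keypair(bpk, bsk);

        unsigned char n[crypto_box_NONCEBYTES];
        randombytes(n, sizeof n);

        /* Classic NaCl quirk: the plaintext buffer starts with 32 zero
         * bytes (crypto_box_ZEROBYTES); the ciphertext comes back with
         * 16 leading zero bytes. */
        unsigned char m[crypto_box_ZEROBYTES + 5] = {0};
        memcpy(m + crypto_box_ZEROBYTES, "hello", 5);
        unsigned char c[sizeof m], out[sizeof m];

        crypto_box(c, m, sizeof m, n, bpk, ask);               /* A -> B */
        return crypto_box_open(out, c, sizeof c, n, apk, bsk); /* 0 = ok */
    }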


Awesome! But that information needs to be on the TweetNaCl web page, not in an HN comment.


It's a terrible piece of self-reflection in that case. His thesis is that his stuff never got adopted because of the NSA? DJB is very smart, but he doesn't necessarily play well with others (as tptacek alluded to) and sometimes does some odd things.

As a random example: Nacl, which is pretty good, isn't portable. The portable version that someone made, Sodium, ripped out a bunch of the platform-specific stuff to make it portable, but because no one knows which parts were security-critical and which weren't, there were more than some doubts. These kinds of things matter.

Put another way, as a piece of self reflection at least he could figure out how to make it harder for the NSA to marginalize his ideas if that's really the reason they haven't been adopted.


It isn't self-reflection. It's a talk he gave to an audience that puts his research into a current events context and provides a backhanded roadmap to the kinds of crypto research topics that are important to the project of securing the Internet against the NSA. As that, it's pretty effective.

It's pretty hard to make a credible argument that Bernstein and Schwabe and Lange did a poor job with Nacl, since the next best alternative is Keyczar, which might not even be maintained anymore. What's the other credible high-level crypto library?


Gutmann's cryptlib (if you stick to the high-level API, which is documented as such).

It's "anciently conservative" (not based on fancy new ciphers, you can like it or not), by a competent cryptographer, and battle-tested.


There aren't. Nacl certainly is credible. But it isn't (at least last time I looked) portable. As I understand it, this is because of a bunch of x86 assembly meant to generate very fast, constant-time crypto code.

This is Bernstein's entire shtick. Which is fine, but if you can't use it on ARM, and therefore on mobile, you've killed a large area where adoption of new crypto could actually happen. Arguably, a slower constant-time cross-platform library would have been a better idea if he actually wanted to see it used widely. This is what Sodium claims to be... but, well, who knows if that's secure.

And I don't think I agree with you that this is Bernstein merely saying "here's why you need what I've done". It's an explanation of why it didn't happen that basically argues that people are being so stupid about crypto that they might as well be NSA shills. As a narrative, it ignores a lot of reasons this stuff didn't happen, and a lot of lessons that should be applied to getting it right this time.


Virtually none of the ideas that Bernstein talks about in here have anything to do with people adopting his implementations. Bernstein's packaging isn't why TLS uses PKCS1v15. It's not why elliptic curve signatures almost always require random ephemeral keys. It's not why userland programs scatter broken RNGs throughout their codebases instead of just tapping urandom. It's certainly not why the most popular modern cipher is one that requires either modern Intel hardware or side channel leaks to be performant.

It's also worth remembering that Bernstein has this reputation you're alluding to among Unix nerds. In cryptography, he has no such stigma; he's a world-class cryptographer and I can't find any indication of interpersonal DJB drama in that field at all. I'm almost inclined to attribute the problem to the Unix nerds.


NaCl includes portable C implementations of all its operations, but it's possible that they might not be constant-time on a given compiler. SUPERCOP now has ARM assembly implementations of the NaCl operations (contributed by Peter Schwabe), but they haven't been integrated into a NaCl release yet.

Schwabe has even written AVR versions of many of the NaCl operations, so maybe you can do your crypto not only on mobile but on an 8-bit microcontroller.


I think DJB works hard enough and it's a bit much to expect him to not only work on his passions but also figure out how to get his work and ideas adopted. We're all good at some things and not others.

What I'd like to see is a way for others to support his efforts.

How can I, for example, currently working as a software developer working at a reasonably 'high' level (currently I'm mainly a consumer of libraries and frameworks) push for better quality in the crypto that I depend on, knowing that DJB has at least partial answers to some of the problems?


Hey tptacek, I'm CTO at a young zero knowledge email startup, and while I'm a proficient crypto user, I'd like to learn more about security and cryptography in general. Do you have any tips for me? Thanks!



I noticed that solutions to Set 1 Challenge 1 in languages other than C++ all 404. Does that mean that no one has contributed solutions? If so, how can I contribute?


They received a ton of solutions in a bunch of different languages when they were running this as a closed competition. They said it would take some time for them to add the solutions to the page, but it's been months and I think they have given up on it.


We didn't give up on it! I did, however, leave Matasano to start a new company, and that's slowed us down. We will certainly still be posting solutions.

Another problem is that every time we've talked about posting solutions, people have asked us to hold off.

Finally, a quick jab: thousands and thousands of people completed several sets of these challenges --- more than a thousand got through the CBC padding oracle, which (in our challenge) is cryptographically more complicated than the TLS POODLE attack. None of them had solutions. You don't need them either. :)


I thought 1.6 (repeating-key xor) was the hardest one prior to set 6 and I'm guessing many thousands made it through that one!


Set #7 is much harder than #6. And there are rumors about set #8, which has attacks on EC algos.


Set #6 is also much harder than set #5! Or at least, I thought so, but I haven't finished it or, sadly, worked on it for quite a few months.


We know what the set #8 challenges are, but we have to do the work of getting good, reproducible challenges worked out for them.


Thank you!


Try this "Crypto I" course

    https://www.coursera.org/course/crypto


Also Udacity has "Applied Cryptography"[1].

While the Coursera course focuses on theory, the Udacity one is a bit more practical. I'd say they complement each other nicely.

[1] https://www.udacity.com/course/cs387


Thanks, will do!


One of the points he makes is that "if it's standardized, then it's broken". So if we were to take his words to the letter, should we be worried about the current work of standardizing ChaCha20-Poly1305 into TLS 1.3 (https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-0...) and assume it's somehow broken?


No.


Your comment could be read as saying that these slides are self-promotional — that is, that they are intended to puff up the importance of the topics that Bernstein has happened to publish work on. A more charitable and accurate reading is that these slides explain why he believed and still believes that those topics were important to spend time on, and why people should do things differently now. (That explains the lack of emphasis on RC4: everybody already knows RC4 is broken.)


I believe the latter, for what it's worth, not the former.


> Application-layer randomness is a bad idea, and, like NaCl does, everyone should just use a single, carefully audited kernel RNG

In Linux we already have this with /dev/random, right?


Yes, modulo "carefully audited", but the problem isn't that it's not possible; the problem is that so few applications rely on it, and that there's folklore encouraging people not to use it.

http://sockpuppet.org/blog/2014/02/25/safely-generate-random...


Are you saying that that article is wrong? I was under the impression that /dev/random and /dev/urandom on Linux were exactly the same thing, but with urandom not being limited by the (very questionable) entropy counter.

I'm pretty sure they are exactly the same on many of the BSDs.


The thing is, /dev/urandom is not even limited by the entropy counter when there is no entropy in the system at all. That was the reason for the RSA factor collisions: the software just used /dev/urandom during boot time, when the embedded systems did not have enough entropy available. As a remedy the LibreSSL team asked for a Linux syscall that blocks if and only if the random pool has not yet been seeded with at least 128 bits of randomness: https://news.ycombinator.com/item?id=8049180
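
For reference, that request became getrandom(2) in Linux 3.17. A minimal sketch of calling it directly via syscall(2), since libc wrappers may not exist yet (this assumes SYS_getrandom is in your kernel headers):

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Sketch: fill buf from the kernel RNG via getrandom(2) (Linux >= 3.17).
       With flags == 0 it blocks only until the pool has been seeded once,
       then never blocks -- the behavior the LibreSSL team asked for. */
    static int fill_random(unsigned char *buf, size_t len)
    {
        while (len > 0) {
            long n = syscall(SYS_getrandom, buf, len, 0);
            if (n < 0)
                return -1;   /* caller should check errno (e.g. EINTR) */
            buf += n;
            len -= (size_t)n;
        }
        return 0;
    }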


That's a Linux bug with a universally-deployed userland workaround. There's no question that urandom should work on Linux the way it does on FreeBSD, but that's not a reason to avoid urandom in userland software. The death toll from avoidance of urandom is enormous; the death toll from this bug... isn't.


I would say it was both a Linux bug and an application bug: you could just open an fd on both random and urandom and fall back from one to the other, or even better, supplement random with a PRNG such as haveged.


"Supplementing" random(4) with a userspace PRNG is exactly the anti-pattern that Bernstein is talking about.


> I'm pretty sure they are exactly the same on many of the BSDs.

* OpenBSD: /dev/random + /dev/urandom are the same and use ChaCha20.

* FreeBSD: They are the same (via symlink) and use RC4. Soon will use ChaCha20.

* NetBSD: Unsure. Last I checked, /dev/random was broken/non-functional like on Linux, and /dev/urandom was slow, like on Linux.

* Dragonfly BSD: Somewhat unsure, but ChaCha output can be xor'd with another stream to produce the returned random numbers.

* Bitrig: when it is released, it will use ChaCha20 like OpenBSD.

Related: libbsd currently uses RC4 in its arc4random functions but has publicly stated a future switch to ChaCha20.


I wrote that article, so, no.


If /dev/random decides that it does not have enough entropy, then reading from it blocks - sometimes for a minute if the machine has few entropy sources (for example, no mouse and no keyboard attached).

That's why most libraries use /dev/random only to seed their own RNG; he says that's wrong, and that the folklore painting /dev/urandom (which does not block) as scary is mistaken.

Usually a library RNG (OpenSSL, JCE) seeds from /dev/random - the seed becomes the internal state of the RNG; the internal state is transformed into the next iteration by applying SHA-1, and at each turn some initial bits of the internal state are returned as random numbers.

Of course the library RNG exists because most protocols require a lot of randomness (and for predictable performance - servers may also need to produce a lot of random numbers). Of course the article has many good points.
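
Schematically, the design being described looks something like this (a sketch only; sha1() is a hypothetical stand-in for any SHA-1 implementation producing 20 output bytes, and the reply below explains why you shouldn't actually build this):

    #include <stddef.h>
    #include <string.h>

    /* Sketch of the hash-based library-RNG pattern described above:
       the seed becomes the internal state, each step hashes the state
       forward and emits some of the bytes. sha1() is hypothetical here. */
    void sha1(unsigned char out[20], const unsigned char *in, size_t len);

    static unsigned char state[20];        /* seeded from /dev/random */

    static void rng_next(unsigned char out[8])
    {
        unsigned char next[20];
        sha1(next, state, sizeof state);   /* transform to next iteration */
        memcpy(out, next, 8);              /* emit some initial bits */
        memcpy(state, next, sizeof state); /* advance the internal state */
    }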


Seeding a userspace RNG with /dev/random is exactly the anti-pattern Bernstein is talking about. It's what broke the Android RNG. Don't do that. Just use urandom.
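
In code, "just use urandom" really is this small (a sketch; error handling trimmed to the essentials):

    #include <stddef.h>
    #include <stdio.h>

    /* Sketch: take key material straight from /dev/urandom, with no
       userspace RNG layered on top of it. */
    static int urandom_bytes(unsigned char *buf, size_t len)
    {
        FILE *f = fopen("/dev/urandom", "rb");
        if (f == NULL)
            return -1;
        size_t got = fread(buf, 1, len, f);
        fclose(f);
        return got == len ? 0 : -1;
    }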


There's a comment about /dev/random and /dev/urandom later in the article.

TLDR - they are broken. He does not even care to explain why they're broken, because everybody knows it.


Re: timing attacks - can these be protected against by simply inserting into a protocol a mandatory random sleep that is strictly larger than the worst-case run time of the crypto implementation? For example, if it takes up to 1s to run the crypto, would a protocol that required a usleep() of uniform_random(5s..10s) be safe?

Such a scheme would obscure any variations in timing, right?

I ask because I have been thinking recently about the cost of "realtime", when many use-cases don't actually need it. Bittorrent exploits this idea - multicast is hard and most methods never really worked well enough to become popular. Then bittorrent comes along and gives up on the idea of preserving an ordered, realtime-streamable transmission. In return, bittorrent got a lot of other features and became a very useful tool.

As these slides mention - it is easy to be blinded by a supposed need to make everything fast.

(brainstorming) Perhaps something like a modern variation on the batched store-and-forward used in the old FidoNet could be useful for non-interactive needs; instead of trying to save on long-distance fees, it would try to use ideas like the usleep("many seconds or more") and much larger (slower) key sizes.

/* Given the tone of those slides, I would guess that DJB just saw PHK's amazing "Operation Orchestra" talk? BULLRUN? "PSYOPS for nerds"? The lesson is important either way. */


Apart from the fact that adding noise just increases the measurement cost, the "add random sleeps" solution has two other problems, one small and one big.

The big one: it implies that you actually know what the side channel is, so you can sleep at a useful point. The virtue of relying solely on constant-time operations is that they're constant time by design. You don't have to worry about how the variable-time operations are implemented, because there aren't any.

The smaller problem is that you have to make sure that your noise function is not itself a target of attack, which can be a little tricky because your noise function must be extremely, extremely cheap to compute.

There's also the obvious "non-problem" that reducing a function to its lowest-common-denominator performance defeats the purpose of high-speed cryptography and variably-timed implementations in the first place.


Random sleeps, as you say, have the problem that they don't eliminate the timing side channel, but just add noise to it. However, as robryk points out below, you can eliminate the timing side channel entirely, assuming you know what it is, by sleeping until a predetermined point. In pdkl95's example, you could sleep until 1.1 seconds after the cryptographic operation was initiated, then send your response. The predetermined point could be determined randomly as long as the random number generator is constrained not to return a point that's so early that the crypto might still be running. But randomness does not help in this case.
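
A sketch of that fixed-deadline idea, using the 1.1-second figure from the example (an absolute-deadline sleep, so the crypto's variable run time doesn't shift the response time):

    #include <errno.h>
    #include <time.h>

    /* Sketch: fix the response time at an absolute deadline chosen above
       the worst-case run time of the operation, so the observable latency
       does not depend on how long the crypto actually took. */
    static void respond_at_deadline(void (*do_crypto)(void))
    {
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec += 1;                    /* deadline = start + 1.1s */
        deadline.tv_nsec += 100000000L;
        if (deadline.tv_nsec >= 1000000000L) {
            deadline.tv_nsec -= 1000000000L;
            deadline.tv_sec += 1;
        }

        do_crypto();                             /* variable-time work */

        /* absolute sleep; restart if interrupted by a signal */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               &deadline, NULL) == EINTR)
            ;
        /* ...now send the response... */
    }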


You addressed the "apart from the fact" point in my comment, but not the meat of it. I point that out because someone else already brought up the measurement cost issue; my point was that there are other concerns.

Also, my comment directly addressed the notion of a lowest-common-denominator sleep.

I think maybe you meant to respond to a different comment?


No, my response is attached to the correct comment of yours. I directly responded to the two "problems" you brought up, but I will now attempt to do so in more detail, in case the problem is that my comment was not clearly expressed.

You say that adding sleeps only helps if you know what the side channel is. That's true; for example, adding sleeps won't help at all against power analysis, acoustic analysis, or cache timing attacks. But that's also true of other side-channel countermeasures. For example, using constant-time operations (as Bernstein suggests and you endorse) probably won't help against attacks using those other side channels either. Most side-channel countermeasures are only effective against the side channels they're intended to protect, or not even those.

But pdkl95 was specifically talking about over-the-network timing side-channel attacks on a protocol. Adding sleeps will defend against that, even if you don't know which part of your code has variable timing. For example, Futoransky–Saura–Waissbein 2007 published "ND2DB", a feasible over-the-network timing side-channel attack on database indices. Sleeping so that the time in between network transactions does not depend on any of the data being transmitted will defend against ND2DB as well as any timing attack against your cryptosystems, while reimplementing your cryptosystems to use only constant-time operations will not defend against Futoransky's attack.

TheLoneWolfling pointed out correctly that you can only do this imperfectly in the presence of concurrency and caches, because the network response may take longer to send if, for example, your data has been swapped to disk in the interim. That is true, but I think that you can still reduce the information leakage by two to five orders of magnitude with this approach. For example, if you choose 1100ms as the time interval between receiving the request and sending the response, and 0.1% of the time your response is delayed until 1108ms because of paging in a single disk page, you are leaking about 0.01 bits to the attacker per transaction. If your timing varied over about a 50ms range and the attacker can measure latency to within 50μs, you would have been leaking about 10 bits to the attacker per transaction.
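
(Checking that arithmetic: a rare event observed with probability p leaks at most the binary entropy of p per transaction, and a channel with k distinguishable latencies leaks at most log2(k) bits.)

    H(p) = -p \log_2 p - (1-p) \log_2 (1-p)
    H(0.001) \approx 0.0114 bits                  (the "about 0.01 bits" figure)
    \log_2(50\,ms / 50\,\mu s) = \log_2 1000 \approx 10 bits   (the 10-bit figure)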

Second, you say that you have to make sure that your noise function itself is not a target of attack, which I take to mean that you must ensure that it doesn't leak information either; for example, the predictable cycling of the low-order bits of old "rand()" implementations could tell an attacker whether the process they connected to had handled transactions from other clients in between transactions to the attacker.

While this is true, I agree with you that it is not a big problem. Here's why. As I said, randomness does not help in this case, so you can simply pick a constant time interval, or one that only depends on data the attacker can already observe, such as the response size; or you can use any CSPRNG to generate a time to add to the constant time interval.

I didn't say anything about this before, but I also agree that it's (usually) a non-problem to make a transaction's average-case time the same as its worst-case time, but I don't agree that doing so "defeats the purpose of high-speed cryptography", because high-speed cryptography generally reduces the worst-case time to an acceptable time.

Is that better? Please correct me if I've said something incorrect above.


That's much clearer to me. I think we probably agree more than we disagree.

My point, as you've acknowledged, is simple. I think, for a bunch of reasons that are more or less easy to mitigate depending on the circumstances, it's a good rule of thumb to just avoid key-dependent side channels altogether. If you know what you're doing, you can shrink the risks. I'm dubious about whether they can be eliminated.

But even if one is the kind of person who can argue eloquently and accurately about those risks on a mailing list (nb: not alluding to you here), one is unlikely in my experience to be the kind of person who can actually implement countermeasures to these problems reliably. I've reviewed lots of expertly developed cryptosystems and found exploitable problems. My experience suggests that even good implementors do not tend to design systems from the vantage point of attackers, which squares with my experience of every other kind of software security as well.

A caveat to all of this is that it's easy for me to opine about what I think good conservative rules of thumb are because my technical depth doesn't extend to safely implementing variably-timed crypto operations. A subject matter expert might be able to refute a lot of what I think.

Because this is a fluffy comment, a quick shotgun blast of technical responses, none of which beg for rebuttals (but feel free):

* I can think of cases where randomized delays on one code path failed to eliminate timing leaks on other code paths that turned out to be relevant to attackers.

* I'm skeptical about exploitation of microscopic timing leaks; I don't even worry about memcmp timing, really (a constant-time version is sketched after this list). Other people are less skeptical, and there's a line of research pursuing statistical filtering and heuristics to bring those timing channels into reach. Also, there are some secrets that are attackable essentially indefinitely.

* My thoughts about worst-case timing are probably poisoned by client reactions to the idea of slowing down crypto operations at all.
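
For reference, the constant-time comparison mentioned in the second bullet is tiny (a sketch of the standard pattern: accumulate all byte differences so the loop never exits early):

    #include <stddef.h>

    /* Sketch: constant-time comparison. Always scans all n bytes, so the
       run time is independent of where the first difference occurs.
       Returns 0 iff the buffers are equal. */
    static int ct_compare(const unsigned char *a, const unsigned char *b,
                          size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff;
    }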


I agree, except that I'm not skeptical about exploitation of microscopic timing leaks.


Ok, thanks! That was a very clear explanation!


Adding random sleeps does not prevent timing attacks. It just adds noise making them a bit more work to pull off.


I suppose one (not very good) approach to "patching in" constant run time would be to limit the runtime of every iteration to that of the upper bound (possibly + some constant). I suspect that's pretty hard to actually do, and even if accomplished, it would open you up to other side-channel/timing attacks (e.g.: check cpu load, and infer timing).


But, if you have an upper bound on the time of the operation, you can wait until the bound elapses, so that your operation always takes that long.


The problem is: good luck getting an upper bound. What happens if the process is paged to disk, for instance?

No matter what upper bound you set you'll still end up with it going over at some point - and as soon as you go over the upper limit you're leaking timing information again.

This also ups RAM requirements, drastically in some cases.

And it isn't even necessarily secure against timing attacks! Because the thread scheduling by the OS changes depending on how much work is being done, and so it could take less or more time to retrieve the saved response depending on how long another request takes to process.


So: subtlety and obfuscation to make good ideas sound bad (Serpent), and bad ideas attractive (in the interest of performance and practicality).

So when any standards body recommends a particular cryptographic approach, we should ask why, and then why again.


Currently they say performance and performance again. Sometimes they say nothing. Sometimes there is a good reason, sometimes there isn't. Sometimes they want to protect, sometimes they want to attack. That's the problem when you give one institution the job to do both.


A previous version of this slide set has been published and discussed here:

https://news.ycombinator.com/item?id=8023812

I did put up a version of those older slides in markdown format on Github here:

https://github.com/pflanze/slides-djb-20140710/blob/master/s...

(Perhaps someone feels like forking and merging in the changes?)


Very eerie. Scary to imagine that the very systems we rely on are fundamentally flawed, not just by generic "NSA back doors" but down to the policies that shaped their implementation.


They keep saying China is a big enemy, so it would not be surprising if enemy states have growing resources to start exploiting these kinds of vulnerabilities as well. We should stop backdooring stuff in the name of fighting terrorism, because the economic and political damage from enemy states with growing IT resources is far worse, IMHO.


Dan, your slides are hard to read online, because you make us read the same slide four times, just for an extra sentence on each. It's distracting. So please just put together the "full" slides when you put them online.


The supposition here is that maybe Bernstein didn't consider that raw presentation foils were harder to read online than slideshows?


Most PDF readers have a presentation/full screen mode for this (even pdf.js in Firefox does). The nice thing about them is that PDFs work nearly everywhere (if they are made that way, which in this case appears to be true).


I've put them on Kivo, which might be easier to view and so people can annotate them.

https://kivo.com/p/h985rFcI


In other words, all crypto is broken?


Technically, I think it is only saying that all the crypto you think is not broken is broken.


All, before implementation.


I don't think that most of the problems stem from the NSA; most of them stem from the fact that security is hard, and we don't culturally have a proper appreciation of this.

We love complexity, we love adding features and wrappers and creating libraries, and when it comes to security this complicates things very quickly. You end up with massive specifications, layers of dependencies, and lots of code. And in many cases a single bug in the code can compromise the entire system.

In security, complexity is very much the enemy and we don't apply that mentality forcefully enough.

The crypto world is evolving quickly, and new vulnerabilities appear. Backdoors appear, ciphers get compromised, this or that RNG is shown to be flawed, etc. And when something that your application uses gets broken, are you going to know? If you do know, are you going to be able to swap it out for something more secure, or is that going to cause a lot of pain and dev time?

Lots of applications are going to prefer the single patch hotfix to the system redesign. And this introduces more weaknesses, because often the hotfix only addresses the single attack, and not the core vulnerability that made the attack possible. Future attacks may be harder, but they haven't been made impossible. (But... it's good enough, right? It'd be like, way harder now to create an attack... right?)

Building properly secure applications is hard, because it requires you to be minimalist and modular, to pay attention to the latest security problems, and to be aware of the types of problems you can introduce when you implement something. Security is incredibly nuanced and you have to be properly aware of all the nuances. A single weakness anywhere in your stack can affect your whole system if you haven't carefully configured your applications.

Crypto is broken because crypto is hard, and application builders will take the easy way out, often without realizing that they're taking the easy way out. Although the NSA has been shown to be an issue, most of the problems seem to come from our own ignorance when trying to build secure applications, and there's no silver bullet.

Creating reliably secure systems is only possible when working with simple systems. Right now, the Internet is built with a ton of complexity. Operating systems are built with a ton of complexity. There are many libraries with different implementations and they all interact together in ways that are difficult to track. (e.g. Shellshock... who would have thought that Bash would be vulnerable?) We aren't going to have real security until we melt all of this complexity away, until we train ourselves to view complexity as an enemy, and until we culturally appreciate how difficult and nuanced security is. And that's going to take a long time and a lot of effort.


Does it seem ironic to anyone else that this was served over HTTP?


Crypto insecurity is just one of many concerns, and focusing on it alone somewhat misses the point. There will always be debate over it.

There is a chilling effect in that big tech companies in the US have been working on side channels for nearly every protocol ever developed. There needs to be research into side-channel-resistant crypto as well.

But what about the bigger, growing trend in computing that is causing more trouble: wrappers over ever-larger parts of the system, with ever-growing dependencies?

Has anyone else noticed the trend toward wrappers over increasingly large parts of operating systems? For example, systemd (https://en.wikipedia.org/wiki/Systemd), while good for providing automation, takes on even more system administration power by controlling package managers.

Docker likewise keeps the configuration information for the full app being deployed inside the container. It will take quite some time to make Docker more secure, and using it may expose you to some classes of attack.

And hypervisors are growing larger and larger as well. The amount of code in frameworks, platforms, operating systems and more keeps growing in the name of interoperability.

While these are good things that need to happen, since fixes need to ship quicker, they are also bad things in that they provide new, larger attack surfaces. Cloud computing is also becoming more and more rewarding for heavyweight computation and sharing of resources, and the breadth of options means there will inevitably be a plethora of choices in the cloud.

The dependencies in frameworks, and in how we develop software generally, have continuously grown over the years as well, to the point that people are using things like VersionEye (https://www.versioneye.com/) to sort out the changes in it all.

While diversity is good and the cloud ecosystem is becoming more profitable, why is there almost no security company focused on making the "cloud" tamper resistant (i.e. monitoring Docker containers and looking for misconfigurations, even on GitHub)? There should definitely be a working effort in that respect.

It is also equally problematic that cloud software will be ambiguous in design. Cloud software built to be fast may use a custom in-house just-in-time code engine for faster database or code execution, which may be harder to really address security-wise.

There are many security companies working towards securing the app stores; why should they not also work towards securing and integrating with cloud providers?

There should also be more open-source antivirus software besides just ClamAV. For people to be secure, there should be a new trend toward open-sourcing specific security modules and working towards a functioning operational level of configuration/management security, as well as malware detection.


> Try to scare implementors away from constant-time software. e.g. “It will be too slow.” “It’s too hard to write.”

More generally, try to scare implementors away entirely. "Never roll your own crypto." Leave it to the experts -- i.e., NSA.


Erm, no.

There are good reasons not to roll your own and none are to do with the NSA. One is that your own is likely full of flaws that you can't see but an attacker will spot in seconds.

It's easy to write crypto you can't break yourself.


Not so much "easy" as "inevitable". Everyone can write crypto they can't break themselves.



