
One heartening aspect of the Snowden revelations as a whole is that they have pretty much confirmed that the things we thought were strong (public crypto research, tor) are in fact strong, and the things we thought were iffy are in fact iffy (Certificate Authorities, Unvetted Crypto, Cloud Services, The Wires, Implementations). This bodes well for the prospect of navigating out of this whole mess successfully, since on the whole we seem to have good instincts about what is trustworthy and what is untrustworthy. I think it has actually tended to clarify thinking about security, so that fewer and fewer engineers are able to delude themselves into trusting something they know deep down is really untrustworthy.


One iffy part I would like to add is government itself. It was generally thought that government would not keep security vulnerabilities hidden, prioritizing the protection of citizens over a minor advantage in hacking.

Together with the earlier leaks regarding sabotaged security standards, the US government is the most damaging entity to computer security today. Anything they do needs to be viewed with the understanding that the NSA's primary priority is to be able to hack other people's computers. Be it an encryption algorithm or a kernel module, the NSA's priority is 100% clear.

That used to be a tin-foil hat idea just a few months ago, and we know better now. If the NSA comes bearing gifts, it warrants great care to accept them from a party with such hostile priorities.


> That used to be a tin-foil hat idea just a few months ago, and we know better now. If the NSA comes bearing gifts, it warrants great care to accept them from a party with such hostile priorities.

Well, not really.

The "tinfoil" idea is that NSA is breaking into crypto so that they can blackmail politicians, black-bag innocent citizens, etc.

But it was never widely assumed that NSA wasn't trying to break every bit of encryption they could. Besides the fact that such activities are literally their job, it's one of the few things they'd just as likely tell you directly if you asked them.

"Q: Are you trying to break cipher/cryptosystem FOO?" "A: Yes, we're trying to break all of them, to protect our SIGINT capability".

NSA has spent literally decades analyzing and breaking the military-grade ciphers of other nations. So I don't know where people got the idea that just because civilians obtained access to military-grade encryption, that NSA would suddenly stop with cryptanalysis efforts. But it has nothing to do with civilians per se; the military and national security opponents are using our civilian crypto too!

Is that inconvenient for civilian cryptography? Sure. But let's not act like something people have always had is being chipped away and taken from them.

Before RSA and DH there were essentially no widely known safe cryptosystems we could use. You used DES, or perhaps you could make up your own Vigenère implementation (have fun with key exchange!).
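To illustrate what "making up your own Vigenère implementation" amounted to, here is a minimal letters-only sketch (illustrative, not something anyone should deploy): each letter is shifted by the repeating key, which is exactly why the scheme falls apart in practice.

```python
# A letters-only sketch of the homegrown Vigenère cipher the parent
# jokes about: each letter is shifted by the repeating key. Once the
# key length is guessed (Kasiski examination), each position is just
# a Caesar shift and falls to plain frequency analysis.
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, c in enumerate(text.upper()):
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        out.append(chr((ord(c) - ord("A") + sign * shift) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
print(ct)                                   # LXFOPVEFRNHR
print(vigenere(ct, "LEMON", decrypt=True))  # ATTACKATDAWN
```

And even this toy still leaves the hard problem untouched: both parties need the key, hence "have fun with key exchange."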

And that's just discussing computer communications. Your phones were all tappable, international telegrams easily read if it suited NSA, and good luck if you used one of those new-fangled cell phones.

The claimed threat is that computers make NSA more capable of surveilling the people at large, but the evidence shows that systems like Tor are putting up an exceptional fight, and even cryptosystems like TLS with many known weaknesses mostly work against global passive surveillance.

You would have to get on NSA's specific shitlist to have to really worry, but being on that shitlist 20 years ago meant anything you said would be picked up... and now, even that is not so certain.


There is a difference between trying to break cryptography, and prioritizing breaking cryptography over protecting civilians.

This is true for almost everything in the world. I want the police, for example, to try to stop criminals, but I do not want them going around with miniguns, spraying the street with bullets. I want the police to prioritize the safety of civilians.

The same goes for the NSA. They are perfectly free to try to break hostile entities' encryption, but they should not sabotage US civilians' security while doing so. When they sabotage standards, or keep vulnerabilities secret so that they and criminals can break into people's computers, the NSA is not prioritizing protecting civilians.


> When they sabotage standards, or keep vulnerabilities secret so that they and criminals can break into people's computers, the NSA is not prioritizing protecting civilians.

Even the standards they have been shown to sabotage (Lotus Notes, Clipper, Dual_EC_DRBG) were sabotaged in a way that should reduce the security of the system only against NSA, not in general. I'll note that I disagree with this concept (I'm not a mathematician, but it seems to me that it is difficult to prove theoretically that the NSA private key could never be derived when you know the plaintext and ciphertext). However, even with these, NSA was trying to maintain the security of the cryptosystem itself; it's not as if they introduced a deliberate backdoor where the thing falls apart if you guess the right 8-letter password.

I see your point about knowing about software vulnerabilities and not acting on them. But the problem is that software will always have vulnerabilities, and the citizenry at large isn't exactly good at always keeping up to date. So if NSA divulges every 0-day they know, they don't help the public that much, but do help the enemies of the public protect their software that much better.

You could almost argue that the NSA "buying up 0-days" is directly beneficial to the citizens, by ensuring that at least those vulns don't end up in the hands of someone who'd actually do something rotten with them.


> something rotten with them.

Like spying on us?


They're doing it to spy on the rest of the world, which is something that they've done for their entire existence. It's one of the two major reasons they exist at all.

It happens that now the rest of the world is using the same crypto we're using, but that's not NSA's fault. Nor is it a major degradation over a status quo; the government has usually been able to "spy on us", it's only been a short time comparatively speaking that it was even possible for the average citizen to completely encipher their communications. Telegrams, for instance, were copied and read as a matter of course if they crossed international boundaries.


The NSA shouldn't just be an attacker; it should also provide defence. If one of their many contractors can leak details to the press for ideological ends, it's pretty safe to assume that much worse secrets have already been leaked to other nation states (China, Russia etc.) for financial gain.

I think it's entirely reasonable to assume that a lot of exploits the NSA has discovered and not revealed (because it thinks they are "secret") have actually been sold to other governments by its own contractors. By not revealing these exploits to citizens, they are actually leaving them open to attack by foreign governments. Large companies trying to defend against industrial espionage are probably most at risk.


> The NSA shouldn't just be an attacker; it should also provide defence.

Uh, it actually does exactly that. NSA's second major mission objective is to ensure that the USA's own communications are secure. For example, the SHA-1 hash standard that underpins many of our cryptosystems was developed wholly by NSA as an alternative to MD5 (which was apparently thought weak at NSA even at the time).

However, there's a difference between ensuring that the theoretical underpinnings of COMSEC are adequate and releasing 0-days. There will always be exploits in the web browsers people use, so NSA is not "helping the citizens" by secretly reporting each and every one of those to browser developers. They can effectively only hamstring their own mission goals by doing that.


> If one of their many contractors can leak details to the press for ideological ends, it's pretty safe to assume that much worse secrets have already been leaked to other nation states (China, Russia etc.) for financial gain.

Especially as the agency in question appears to have no compartments or levels of access. I've been wondering how a comparatively junior contract worker could access so much information...


They're very compartmented, as it turns out.

But Snowden was a sysadmin and successfully managed to digitally impersonate persons actually in the right compartments, among other things, in order to get access to the data he wanted.

I suppose it's better to say that NSA is too reliant on contracted systems administrators to handle what should be inherently governmental functions, and that they don't properly compartment sysadmin functions. But then again, is it even possible to completely protect a computer network against an insider sysadmin threat?


Unfortunately, it's politicians in 6 countries who try to dismantle the now-totalitarian levels of surveillance who end up on this shitlist too. Then soon it will be you and me.


The NSA gives a lot of advice to civilian cryptographers. It used to be tinfoil to assume the advice was deliberately bad. Now we know (some of) it is.

The NSA also has been found to give good advice sometimes, so just doing the opposite of what they say doesn't work either.


> It was generally thought that government would not keep security vulnerabilities hidden

Was that what people thought? Were there vulnerability reports in open-source software that were coming from the NSA or thought to be coming from the NSA? Surely everyone knew that the NSA was capable of finding exploits in software, and I would think that it would be hard to keep secret whether or not they're being reported.

> That used to be a tin-foil hat idea just a few months ago, and we know better now.

It's well-known that the NSA pushed to have DES limited to 56-bit keys. There were suspicions about Dual_EC_DRBG long before there were any leaks from Snowden. In the 90s, they pushed the Clipper chip, in which they'd engineered a back door. I think that everyone understood that the NSA had somewhat of an interest in weaker cryptography. That's why the cryptographic standardization processes happened in the open and when constants were needed, they were taken from the digits of pi or some such sequence.
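The "nothing-up-my-sleeve" convention mentioned above is easy to check for yourself. One concrete example (SHA-256 rather than a cipher, and using cube roots of primes rather than digits of pi): the published round constants are the first 32 bits of the fractional parts of the cube roots of the first 64 primes. A quick sketch verifying the first four:

```python
# SHA-256's round constants are derived, in public, from the cube
# roots of the first primes: each constant is the first 32 bits of
# the fractional part of cbrt(p), so there is no room to hide a
# trapdoor in the choice of constants.
def frac_bits(x, bits=32):
    """First `bits` bits of the fractional part of x."""
    return int((x - int(x)) * 2 ** bits)

primes = [2, 3, 5, 7]
derived = [frac_bits(p ** (1 / 3)) for p in primes]
print([f"{k:08x}" for k in derived])
# published constants: 428a2f98 71374491 b5c0fbcf e9b5dba5
```

The point of such derivations is exactly the one made above: when the constants are forced by a public formula, the designer cannot have chosen them to embed a secret weakness.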


There's a video from the RSA conference in 2011 with Dickie George, who was the director for Information Assurance at NSA when DES was being reviewed. He claims that the agreement between NIST and NSA was that (1) NSA would only change things if they could find a specific problem with the cipher, and (2) NSA promised that DES would have security equal to its key size. The implication is then that they decided 56 bits was how secure it was, and then picked that as the key size.

You can believe him or not, but I don't see any particular reason not to.

link to the video, the relevant bit is ~8min in: http://www.youtube.com/watch?v=0NlZpyk3PKI


Thanks for the video. I ended up watching the whole thing. My interpretation is a little different from yours: I think George is saying that there's no point in having a key longer than 56 bits given that the goal is 56 bits of security, but he's vague about where the requirement for 56 bits of security came from. In any case, the video certainly supports my larger point that the idea that NSA would sabotage a crypto standard was mainstream within crypto circles, even in the '70s.


> It was generally thought that government would not keep security vulnerabilities hidden

It depends on which system they find it on. According to this talk, https://www.youtube.com/watch?v=E4Zx5rQFk4U , if vulnerabilities are found on secure systems they are immediately classified; to be able to report one, they have to re-find and document the vulnerability on a non-secured system.


DES was about speed. It came from an age when computers were slow, DES itself was slow, and the NSA was already helping it defeat a cryptanalytic attack.

Limiting the key to a sensible size meant it could be used in a practical sense.
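The practical effect of that key-size choice is easy to put numbers on. A back-of-the-envelope sketch, using the roughly 90 billion keys/second search rate of the EFF's 1998 Deep Crack machine as the assumed rate:

```python
# Rough arithmetic on the 56-bit DES keyspace: at Deep Crack's
# approximate 1998 search rate, a full sweep takes days, not
# centuries (and the right key is found, on average, in half that).
keyspace = 2 ** 56        # 72,057,594,037,927,936 candidate keys
rate = 90e9               # keys per second (approximate, Deep Crack, 1998)
hours = keyspace / rate / 3600
print(f"{keyspace:,} keys, ~{hours:.0f} hours for a full sweep")
# -> roughly 222 hours
```

Which is why 56 bits, whatever the reasoning in the '70s, stopped being a sensible size long before DES was retired.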


At a DefCon (15, I think?) I got to ask a panel of FBI/CIA/NSA bigwigs a question at an open Q/A panel. I asked how they made the decision of which exploits they'd keep for themselves and which they'd help the project patch.

The response was 100% boilerplate. "We have a system for evaluating it," was the basic answer, in more words than that. I didn't really expect anything more, but it was worth a shot.

I've never believed that their "system" was in any way primarily for the public interest. I can't point to any specific evidence; it just never felt like the type of thing they would do. Good exploits just seemed far too useful to be worth giving up.


I never thought that. I always assumed all cyber-war capable governments had hidden caches of 0-day vulnerabilities.


Prior to this one should not have assumed (and arguably still should not assume) Tor is safe against the NSA.

Tor was explicitly not designed to protect against a global passive adversary. That's the price it pays for low latency. With the amount of network data the NSA has, they probably constitute such an adversary.
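A toy sketch of why a global passive adversary is so dangerous to any low-latency design (this illustrates timing correlation in general; it is not Tor's actual protocol nor a known NSA capability): if an observer can see packet timing at both the entry and the exit, simple statistics link the two flows.

```python
# Toy timing-correlation attack: low latency means the exit flow
# preserves the entry flow's inter-packet gaps (plus small jitter),
# so an observer seeing both ends can match them by correlation.
import random

random.seed(1)

def flow(n):
    """Random inter-packet gaps (seconds) for a simulated flow."""
    return [random.expovariate(10) for _ in range(n)]

def correlate(a, b):
    """Pearson correlation of two equal-length gap sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

entry = flow(200)
exit_same = [g + random.gauss(0, 0.002) for g in entry]  # same flow + jitter
exit_other = flow(200)                                   # unrelated flow

print(round(correlate(entry, exit_same), 2))   # near 1.0
print(round(correlate(entry, exit_other), 2))  # near 0.0
```

Defeating this requires batching, padding, or delaying traffic, which is exactly the latency cost Tor chose not to pay.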

It is actually rather surprising that Tor gives them this much trouble.


> It is actually rather surprising that Tor gives them this much trouble.

I am not really convinced that what we have seen demonstrates conclusively that it does. There is the possibility that we are looking at parallel construction, or that these attacks are genuine but they are sitting on more dramatic capabilities for targets they think are worth it (perhaps because the Chinese continuing to trust and use Tor is a better situation for the NSA to be in than the Chinese doing everything the old fashioned way with microfilm and dead-drops).

The best way to go forward is to continue to assume that Tor does not present any significant difficulty to the NSA.


It's a question of opportunity cost. The NSA has extensive resources, but it's unlikely that they can employ overwhelming resources (such as would be theoretically necessary to break tor) for every situation where overwhelming resources specifically directed are a theoretical weakness. At the moment, implementations are a much easier target, and so I don't necessarily think that it's surprising that they do have trouble with strong but imperfect systems like tor.

Perhaps once all implementation issues are removed from the security equation (I'll hold my breath while I wait...) it will be necessary to think up better systems. But right now, what's hard for us is hard for the NSA, and so that should be the guiding principle for strengthening current systems and developing new ones. I find that an empowering idea.


Yes - exactly. Opportunity cost is something that is not discussed enough. Conceivably, every communication of any "target" is vulnerable at the right price point. From technology solutions (provided by NSA) to in-field solutions (provided by CIA), we shouldn't believe that we can be totally "safe" from unwanted eavesdroppers.

It's not a question of "if" Tor (and friends) are vulnerable. We should assume and operate like they are, but with some level of acceptable tradeoff. It's like a safe or an ATM - neither guarantees perfect security; they just provide enough security for the expected loss of their contents.

The problem: it's just very hard to evaluate the opportunity cost, since we don't really know how widespread or "easy" it is for privacy to be breached. These types of revelations help establish the "market price" that we can use as a basis for evaluating our options for communications (including traditional man-to-man transport).

I personally don't have any communication which I consider privileged enough to warrant the extra hassle of running Tor, etc. I consider a TLS connection with my bank secure enough for my concerns and I don't have the desire to pull otherwise questionable content from any type of onion router. Therefore, I enter the market with a different expectation of features and cost I'm willing to pay.


The weak point, as usual, is the endpoints. The attack vector described in these documents is JavaScript via some library called E4X. Makes me wonder why the Tor bundle doesn't come with NoScript enabled by default.


There is an answer about this in their FAQ that basically states that having NoScript on by default breaks too much of the web.


Utopian as it sounds, how nice would it be if the whole web provided no-JavaScript versions of sites? In the end, in 90% of cases JavaScript is used just to do fancy things, while the actual functionality could be achieved with much less pain (and vulnerability).


I think this would have been true a few years back, but more and more websites are using JavaScript in irreplaceable ways. I suspect JavaScript will become increasingly necessary as frameworks like angularjs become more popular. That said, if you are just interested in buying a pizza or reading a blog post, then maybe JavaScript will never be really necessary.


I agree with your post generally, but has Snowden said anything about CAs? I did expect to hear that at least one has signed anything the NSA put in front of them, but I don't recall Snowden providing "proof"* of this.

* I'm in no position to verify anything Snowden leaks.


We didn't need these revelations to know CAs are not generally trustworthy. We already had proof. http://en.wikipedia.org/wiki/Certificate_authority#CA_compro...


The main thing is that CAs are centralized proxies for trust combined with the revelations that confirm that the NSA directly targets such central entities. There was a lot of general uneasiness about the reliance on CAs before the Snowden revelations, and I think the fact that NSA documents show that it leans on such central entities confirms the wisdom of that unease.


I don't know about the NSA, but I've personally negotiated a deal with a CA to add whatever domains we wanted to a certificate without validation. They just "trusted us."


Not to be a downer, but I do feel these systems and exploits are designed by us, the hackers we so much want to believe are good. It looks like most hackers have a price, and probably derive joy from designing these systems for the government.

We know what is trustworthy, and we know how to build and do the right thing. Yet there are tens of thousands of brilliant minds working for the NSA against everybody else.


The disheartening thing is, though, we don't really have novel technologies (quantum crypto?) to guarantee security anymore, and the existing ones will soon be exploitable on a mass scale. This is bad for internet commerce, and for the internet itself as a medium. In the eyes of the layman, the internet is untrustworthy. I won't be surprised if in the future we see closed, privately owned physical networks that guarantee security to their customers.


I think Facebook proves definitively that the layman doesn't care about "trustworthiness of the internet".


They do for anything that requires money and beyond. That's why banks etc. try hard to persuade people they have high security standards.


That will be a problem once you can buy a quantum computer from the local IT shop, not while there are 5 of them in the world and you need a team of physicists to operate them.


Quantum crypto is going to be available to governments first and to civilians later, if at all, ever. We should be making plans for how to protect ourselves from the cracking power of quantum computers with our traditional computers instead: http://www.pqcrypto.org/



