Heartbleed disclosure timeline: who knew what and when (smh.com.au)
307 points by dctrwatson on April 14, 2014 | 104 comments


I still like how these timelines are all about the big tech companies and not about governments, services, banks, etc. Those are arguably where some of the biggest risk targets actually are (see today's post about the Canada Revenue Agency, the Canadian tax authority, losing people's info to Heartbleed).

We probably need a stronger web security system people can rely on. Some of the blame also falls on the big companies, banks, telcos, etc. themselves. I mean, who wants to report security flaws to telcos in a world where, instead of giving you a $15k bounty, they send you to jail like weev? And OK, that's not the best comparison, because he went well beyond discovery and exploited the flaw as well. But nonetheless, some of the big corporate world needs to clue in and get friendlier with the tech community rather than hostile, because right now they don't have a very good reputation, and so tech leaves them out in the cold. Did anyone think much about notifying telcos and banks, or just other big tech companies?


In a post-Snowden world the assumption is now that any vulnerability shared with a government is fair game for the government to use offensively.

It wouldn't surprise me if plenty of people who would have previously reported a vulnerability to the government or local CERT are no longer willing to do so.


In both a pre- and post-Snowden world, I would imagine that if a government discovers a bug like Heartbleed and has no evidence of anyone else knowing about it, they will classify it and use it offensively. They will then collect evidence and construct a model of who else knows of the vulnerability, and weigh the costs (to their offensive capabilities) and benefits (to everyone's defensive capabilities) of disclosing it publicly.

Once there is reason to believe that the benefits outweigh the costs, they will disclose it in a way that doesn't expose their knowing about it beforehand.


> I would imagine that if a government discovers a bug like Heartbleed and has no evidence of anyone else knowing about it, they will classify it and use it offensively.

You seem to think "a government" is a unified entity. If the NSA (for example) discovered Heartbleed I'd expect you would be correct.

If someone in the IRS (for example) found it I'd expect they would behave very differently.


I believe standard procedure would be for the IRS to report it to US-CERT, which is part of the Department of Homeland Security.


> I believe standard procedure would be for the IRS to report it to US-CERT, which is part of the Department of Homeland Security.

At which point (I imagine), decisions would be made along the same lines as I described, the key questions being "can we keep it secret?" and "who has this capability?".

I tried to be nation-agnostic, as I imagine there may be a bit of difference in how a given nation's intelligence agency weighs the value of its offensive capabilities against the public's defensive capabilities, in light of who knows about the bug.


Too bad we all know which agency is far more likely to discover such a vulnerability.


Basically, governments are rational actors. Not really surprising but many people seem to assume the government will be either irrationally benevolent or irrationally evil.


Even if 100% of the components of a government behave rationally, that doesn't necessarily mean the government as a whole behaves rationally.

A collection of rational actors may, as a whole, behave irrationally, defined here as acting 'against their own best interest'. The tragedy of the commons is but one example.

Organized systems may easily have structures that create irrational and destructive behavior, and those structures can be stable enough that, for any individual employee, attempting to change the system would be risky, disadvantageous, and irrational.


I think nl (sibling post) is more accurate: "governments" don't exist as such. You have lots of agencies with conflicting interests, and they're eventually all made of people.


> Basically, governments are rational actors.

Citation needed. :)

There are two schemes that assure irrational governmental behavior -- dictatorship, and democracy. For different reasons, of course.

> Not really surprising but many people seem to assume the government will be either irrationally benevolent or irrationally evil.

There's plenty of historical support for those views -- you know, evidence?


I would say, rather than "the government behaves rationally", that every individual behaves in their own best interest.

High-ranking intelligence officials will push for more power, more capabilities, and will scrap things if they become risky.

If there's somebody sufficiently powerful whose career would be threatened from some risk of exploiting the bug, that risk will be taken seriously.

If there's somebody sufficiently powerful whose career would benefit from the government letting people know about that bug, it'll happen. This sounds unlikely, but this isn't my area of expertise and if somebody wants to correct me please do so.

Congress will largely ignore it because it has nothing to do with their constituencies, they're mostly technically illiterate, and they're being lied to anyway.

...and so on.


> every individual behaves in their best interest

Daniel Kahneman says this has been proven false in his book Thinking, Fast and Slow.

I think it is wrong on two levels. First, an individual's "perceived best interest" is often very far from their real interest. Second, individuals, and not only a few edge cases, often do not behave in their best interest, even if we take that to mean their "perceived best interest".

Kahneman lists the three attributes of humans modeled as "Econs" in the most common economic model: they are rational, they are selfish, and their tastes do not change over time. All three are obviously wrong.


You're right - I should have phrased it "perceived self-interest mixed with doses of human volatility/irrationality"


What might appear to be irrational behavior to you is just due to people making decisions based on different information, differing priorities and a different decision making process informed by different life experience. If someone's decisions appear irrational to you, you just don't possess enough data about the information they have access to, their priorities and the world model in their head.


So basically what you're saying is the word "irrational" has no meaning whatsoever, and that every act is rational, just perhaps not to your point of view or level of information.

In the theoretical sense, I agree with that, but in a world of social interconnections, we need a meaningful framework and vocabulary for judging these sorts of things. While I believe that true objectivity is hard (and maybe impossible), I think there's a lot of social value in coming up with widely accepted definitions of rational vs. irrational, at least on a case-by-case basis.


>what you're saying is the word "irrational" has no meaning whatsoever

It has meaning. It means "we have insufficient information" or "something is going on here that we don't know about or understand."


I think you are mixing up "rational" and "reasonable". It is not rational to flinch away from that puff of air you get at the optometrist, but it is reasonable.


Rational and reasonable are actually somewhat synonyms (the first definition of rational on dictionary.com includes "reasonable").


Why is the assumption "governments are rational" more reasonable than "governments are irrational"? Both seem equally possible to me, given the fact that we don't have the information to tell, either way.

In any case, that's a false dichotomy. Governments are not uniform entities, and there's no point talking about them as if they are. Governments, like people, do some rational and some irrational things.


Because it takes quite a lot of work and training for individual humans to act rationally. Therefore, without other information, it is a better default assumption that any given human or system-of-humans is non-rational.


Another angle is that telcos and banks are localized services (none has a high market share across the globe) that people use less frequently than email. Whilst they carry potentially higher rewards for blackhats, in terms of aggregate effect on society, a single bank being insecure is probably less noteworthy than a globally used service like Gmail or Facebook being insecure.

Also, banks only deal with money, whereas in certain locations and situations, lives potentially depend on Gmail, Facebook and Twitter being secure. (oppressive governments and such)


As soon as anyone knows, they are going to use that info however they see fit, probably getting their own house in order before spreading the news so as not to put themselves at risk.

However, even though spouting a conspiracy theory is a faux pas here, I can't help but wonder if the "he who smelt it, dealt it" rule applies. Let's say your country sets up a network interconnecting major research institutions, etc. After its use takes off and it is obvious that everyone is going to be communicating over this new medium in a short amount of time, you see the value in keeping tabs on people. You decide that it is in your best interest to put backdoors into the encryption algorithms in enterprise communication software. Then you see there are these guys that have become the place everyone goes to find what they are looking for. This is a good place to be. You eventually get your hands on this as well. It's a waste of time and energy to constantly be decrypting everyone's messages, so what the hell, let's put a backdoor in that too. Everything is going well. Wait... OK, we should have thought of that. Another country now knows about this vulnerability and it hasn't been publicized, which means they will start using it to spy on the contractors that work for us. We'd better leak this information so everyone fixes their hole. Let's tell Google. We're already on good terms with them.


The full timeline doesn't make big tech companies look particularly great. Heartbleed was a bug that came in with a questionable implementation of a questionable feature. It sailed through standards bodies and OpenSSL itself. A sensible explanation is that these are underfunded, understaffed efforts.

But next, the feature went live on the servers of more or less everyone, including Google and Yahoo and Amazon. People who employ and, presumably, well-compensate many experts in security and SSL implementations. Still, the code marched on, unnoticed, undisabled, deployed. How did that happen?


Big tech companies, small ones, and OSS folks all write bugs. How does it "not look particularly great" to do something that literally every person writing software does? Wait, not even to do it: to miss an error in an obscure change to a backwater part of OpenSSL that no one uses. It does not strike me as likely that these companies review every change to every possible piece of sensitive software. The volume of work would be far too large.


I think the broader point being made is that practically anyone could come along and write code that then gets widely deployed. They just have to pick the right project, build up a bit of trust, and then submit a subtle but intentional bug. It's scary how easy it might be. It's not very clear how to defend against this.


A report suggested that the NSA had known about the Heartbleed bug for two years and took advantage of it (http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-us...)

This suggests that the NSA discovered it first, then Neel Mehta, and finally Codenomicon. As other commenters have rightly suggested, the government, or at least an agency thereof, uses information asymmetry as a weapon of intelligence against its users (a.k.a. citizens). Private corporations, by contrast, have a more contractual duty to their users to ensure that these bugs are patched so that those with malevolent intentions can't harm their users.

In my opinion, a bug that has been discovered but not shared is just as bad as a weaponized biological agent. Each relies on secrecy to exploit a group that is unaware.


That article contains exactly zero facts. It is complete FUD. The only evidence they have that the NSA knew about it is a statement attributed to "two people familiar with the matter".

I am not saying it's impossible the NSA knew. But people throwing around that article as evidence is absolutely laughable.


Likely quite a few people between the (two confirmed and one rumored) discoverers as well.


Most banks weren't vulnerable to Heartbleed.


Has anyone asked how two security researchers supposedly found the same exact bug (one that had been in the code for 2+ years) within days of each other? How likely is that?


It happens routinely. Allow me to illustrate: on a pentest a couple of years ago, Vitaly McLain from our Chicago team found that if you accidentally slipped a NUL into a header that nginx reverse-proxied, nginx would short-copy (it used strncpy) and reveal server memory. We reported it to our client, and in the time it took for them to OK our upstream report --- a matter of hours --- someone else reported the same bug.

What's funny about that is that the nginx bug is functionally identical to Heartbleed (modulo that it only worked on nginx).
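
(A minimal, hypothetical sketch of that class of bug, not the actual nginx code: the forwarded byte count comes from the wire, but the copy uses C-string semantics and stops at the first NUL, so the tail of a reused heap block leaks. strcpy is used to keep the sketch small; strncpy's own footgun is that it zero-pads short copies but leaves overlong ones unterminated.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    enum { HDRLEN = 32 };

    int main(void) {
        /* A heap block that previously held something sensitive. */
        char *old = malloc(HDRLEN);
        memcpy(old, "secret secret secret secret 123", HDRLEN);
        free(old);

        /* A same-sized allocation often gets the same block back,
           still holding its old contents. */
        char *hdr = malloc(HDRLEN);

        /* Attacker-controlled header value with an embedded NUL;
           the proxy knows it read HDRLEN bytes off the wire... */
        char wire[HDRLEN] = "X-Evil: a";

        strcpy(hdr, wire);               /* ...but the copy stops at
                                            the NUL, after 9 chars   */

        fwrite(hdr, 1, HDRLEN, stdout);  /* forwards HDRLEN bytes;
                                            bytes 10..31 are stale
                                            heap data                */
        free(hdr);
        return 0;
    }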


I've been reading So Good They Can't Ignore You and there's an interesting section on this phenomenon:

“The Baffling Popularity of Randomized Linear Network Coding

As I write this chapter, I’m attending a computer science conference in San Jose, California. Earlier today, something interesting happened. I attended a session in which four different professors from four different universities presented their latest research. Surprisingly, all four presentations tackled the same narrow problem—information dissemination in networks—using the same narrow technique—randomized linear network coding. It was as if my research community woke up one morning and collectively and spontaneously decided to tackle the same esoteric problem.

This example of joint discovery surprised me, but it would not have surprised the science writer Steven Johnson. In his engaging 2010 book, Where Good Ideas Come From, Johnson explains that such “multiples” are frequent in the history of science. Consider the discovery of sunspots in 1611: As Johnson notes, four scientists, from four different countries, all identified the phenomenon during that same year. The first electrical battery? Invented twice in the mid-eighteenth century. Oxygen? Isolated independently in 1772 and 1774. In one study cited by Johnson, researchers from Columbia University found just shy of 150 different examples of prominent scientific breakthroughs made by multiple researchers at near the same time.”

Excerpt From: Newport, Cal. “So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love.”

His thesis on this is basically that these discoveries depend on a lot of other things occurring first and that once those things have occurred, anyone looking in the right place will see it.

With all of that said, I remain skeptical, at least in the Heartbleed case (they're just SO close together). tptacek has more experience in these things than I do, of course, so I'll defer to his thoughts.


> It happens routinely.

I think that still leaves open the question of how likely it is, or perhaps the question is better phrased as whether it is due to chance or not.

It is extremely unlikely to be due to chance in this case. It was out for years, but discovered independently within days of each other.

One possibility is that it was discovered independently many times, but we only find out after someone actually goes public. At that point, anyone who found it earlier either never wanted to say publicly that they found it (or they already would have), or, even if they wanted to, would be admitting to finding it earlier and not disclosing, so they'd look bad.

That leaves others finding it slightly after the group that goes public - soon enough that it wasn't public yet, so they have evidence that they actually did find it independently. That would lead to exactly the result we see here, in fact. But this does imply that there were, very likely, multiple other groups that found this earlier but never went public.

The only other option is that it wasn't random, but some event led to both discoveries. Perhaps a hint was around, a new method of finding vulnerabilities, or anything at all that could nudge people's minds in that direction. That option is much less worrying, but very hard to prove.


I think if you reread my comment closely, you'll see that it describes a scenario that is spookily similar to Heartbleed: same impact, same latent bug (presumably in the code for many years, maybe even longer than TLS heartbeats), and co-discovery not in a matter of days but hours.

Sorry, there's probably nothing interesting happening here, except for coincidence.


Yes, it is spookily similar, I agree 100% (I did read it carefully ;)

What I disagree with is that one event happening means that another event very similar to it is likely in a statistical sense.

Of course, it is an argument that supports that to some extent. But it is fairly weak support, when on the other hand statistics strongly imply the opposite.


> What I disagree with is that one event happening means that another event very similar to it is likely in a statistical sense.

Why? It makes perfect sense to me that, when one type of vulnerability is discovered, many more of the same type will be discovered very soon thereafter. You have to consider that vulnerability discoveries don't happen in a vacuum. There's a near-infinite number of attack routes that one could investigate, but which one you're looking at now is a product of the environment you operate in.

For example, let's say you're investigating a web server. Then, some security researcher demonstrates a flaw in an image codec where even using "safe" memory copy functions in C leads to a vulnerability if tainted values are passed in for the size parameters. You think, "Hmm...I'm not decoding images, but web servers do copy memory. I should check to see if any memory copy operations are using tainted values." Bam! You discover Heartbleed...but do you honestly think you'd be the only researcher working on web servers that saw the image codec demo and made that connection? Unlikely.


Certainly, yes - that would mean that this is not a coincidence.

I'm not arguing this is a coincidence. Just that if it was totally random, it would be very unlikely. So the plausible possibilities are (1) what you suggested, some common cause, or (2) that the discovery happened randomly multiple times but was only disclosed once.


> It was out for years, but discovered independently within days of each other.

Well, two years, the minimum number of years for which you can claim it was out for years. In order to avoid tricking yourself and your audience I'd stick to "a little over two years" rather than "years".

There were 749 days between a bugged version of OpenSSL being released and the flaw being discovered. There were 13 days between the independent discoveries. That is still a small window compared to 749 days, but not that small; and during those 749 days more and more people put bugged versions of OpenSSL into their infrastructure, which, I would assume, increases the chances that somebody discovers the flaw.
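
As a rough back-of-envelope, assuming (purely for illustration) that the two discovery times are independent and uniform over those 749 days:

    P(within 13 days of each other) = 1 - (1 - 13/749)^2
                                    = 2*(13/749) - (13/749)^2
                                    ≈ 0.034

So roughly a 3% chance even under that naive model, before accounting for growing adoption and scrutiny making late discovery more likely.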

I would agree with you that it's not totally random -- of course it isn't; it's probably related to the state of the world changing over time. People adopting bugged versions is one factor, progress on security tools may be another, other publicized TLS bugs causing people to look harder at this area may be yet another, etc.


> The only other option is that it wasn't random, but some event led to both discoveries. Perhaps a hint was around, a new method of finding vulnerabilities, or anything at all that could nudge people's minds in that direction.

As somebody who has witnessed several cases of simultaneous discovery of security vulnerabilities, this is exactly how it happens. Some vaguely related event happens which causes multiple researchers to all start looking in the same place.


An aside - was it really strncpy? That function has such baffling behavior (it creates unterminated strings on overflow — contrary to its str* name) that I'm surprised it isn't banned by every static checker in existence.


Perhaps not as unlikely as you might think:

http://en.wikipedia.org/wiki/Multiple_discovery

I remember years ago reading about how some researchers were investigating a specific variation of this phenomenon, in which a number of independent and geographically dispersed labs would be, for example, trying to grow a particular crystal. For a number of years none were successful. Then within a matter of weeks of each other, all of the labs would independently discover the same technique that made it possible. This happened so close together that pure coincidence was incredibly unlikely.

Apparently this same phenomenon was occurring far more often than it should in theory (based on the probability of pure coincidences), which is what prompted the research into it.

Unfortunately, I don't know what the final outcome was. It would be amazing if anyone else remembers this and can point out the original source.

Personally I am inclined to believe that it is simply a matter of prior events causing multiple individuals to be thinking about the same things. If you have lots of people searching in the same locality, there's a much higher chance that they will all find the same things around the same time than if they are all operating in a truly random search space.


The most innocent explanation would be pure coincidence, or a coincidence slightly nudged by the fact that the recent Apple/GnuTLS bugs caused an industry-wide look-back at similar old, under-reviewed open-source code.

A more sinister explanation would be that evidence of exploitation helped focus attention by each team once it touched them. However, a colleague of Neel Mehta implies that this was an audit-driven discovery without regard to active exploitation (https://news.ycombinator.com/item?id=7558015).

That could still leave the possibility that news of Mehta's discovery leaked, as either a vague hint or as enough info to create larger scans, which then helped tip off the Codenomicon group.

Despite all the reasons for secrecy, non-disclosure, and protection of proprietary methods, I hope each discoverer eventually says more about the steps leading up to discovery.


http://www.openssl.org/news/secadv_20040317.txt -> http://www.openssl.org/news/secadv_20080528.txt -> http://www.openssl.org/news/secadv_20120510.txt -> Heartbleed ... a pattern emerges? At least for another one of the discoverers. :) Furthermore, "goto fail;" is actually cited twice on http://heartbleed.com, and the GnuTLS bug once.


Coincidence seems very unlikely, but the other explanations you suggest do seem plausible. Hopefully it's one of those, because otherwise, given how unlikely coincidence is, it means the bug was most likely discovered multiple times earlier but not disclosed.


Security researchers are social animals. There might have been a hint on IRC or a mailing list somewhere, and the other team got on it.


You might as well ask how two people (that we know of) discovered calculus independently in much the same timeframe.


"CloudFlare later boasts on its blog about how they were able to protect their clients before many others." I wonder if some companies will boast to potential new customers regarding their relationships with vendors that will offer them advanced patches on critical issues such as heartbleed. Kudos to their opportunistic marketing team, but I hope this trend does not continue.


While I don't care so much who got notified first (any given embargo timeline is going to frustrate large numbers of HN people; if that's a problem for you, start finding bugs or post a bounty), I find several things not to like in Cloudflare's marketing. Near as I can tell, Cloudflare is the origin of the notion that TLS private keys weren't going to be in the heap near packet data, a supposition they jumped to for no reason I can discern.

They didn't find the bug. They benefited from a heads-up on the bug. Then they promoted a mistaken assumption about the bug, along with a gamified challenge site that was in fact a poor vehicle for investigating nginx+OpenSSL (how about, for instance, any decent debugger instead?).

Meanwhile, ops teams at big companies were in heated debates with security about whether keys needed to roll.

There are good people working at Cloudflare and I'm not part of any outrage battalion. I'm just not a fan of how they handled this particular incident.


To be fair, Cloudflare didn't just jump to this conclusion for no reason, they apparently did test this by looking at the location of request buffers compared to the private key, trying to extract it themselves, and investigating the heap layout as a whole: http://blog.cloudflare.com/answering-the-critical-question-c... They probably just got some or all of their assumptions wrong, as many have.


I'm not sure how much of this I buy.

I did what Willem did: I instrumented a small OpenSSL driver program and snapshotted memory. I did not go through the effort Jeremi Gosney and Willem and Thai and Ben Murphey went through to trace things through the code, but it was immediately apparent that there was more going on than the blog post Cloudflare wrote suggested.

More importantly, the whole thesis of that blog post is that RSA private keys are loaded into memory once and never moved again. But that's obviously not true: intermediates based on private key components are created during Montgomery multiplication, for instance.
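
To make that concrete, here is a rough sketch of the CRT recombination OpenSSL performs on every RSA private-key operation (the real code adds blinding and Montgomery contexts; this function is illustrative, not OpenSSL's actual code). Every temporary below is a heap-allocated BIGNUM transiently holding key-derived data:

    #include <openssl/bn.h>

    /* Illustrative sketch of RSA-CRT (Garner recombination).
       Inputs: c (ciphertext) and the private-key components p, q,
       dP, dQ, qInv; m receives the plaintext. m1, m2, and h are
       key-derived intermediates living on the heap. */
    static int rsa_crt_sketch(BIGNUM *m, const BIGNUM *c,
                              const BIGNUM *p, const BIGNUM *q,
                              const BIGNUM *dP, const BIGNUM *dQ,
                              const BIGNUM *qInv)
    {
        BN_CTX *ctx = BN_CTX_new();
        BIGNUM *m1 = BN_new(), *m2 = BN_new(), *h = BN_new();
        if (!ctx || !m1 || !m2 || !h)
            return 0;

        BN_mod_exp(m1, c, dP, p, ctx);   /* m1 = c^dP mod p       */
        BN_mod_exp(m2, c, dQ, q, ctx);   /* m2 = c^dQ mod q       */
        BN_mod_sub(h, m1, m2, p, ctx);   /* h  = (m1 - m2) mod p  */
        BN_mod_mul(h, h, qInv, p, ctx);  /* h  = h * qInv mod p   */
        BN_mul(m, h, q, ctx);            /* m  = h * q            */
        BN_add(m, m, m2);                /* m += m2 (plaintext)   */

        BN_clear_free(m1);
        BN_clear_free(m2);
        BN_clear_free(h);
        BN_CTX_free(ctx);
        return 1;
    }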

The bigger problem is not that Cloudflare got things wrong. It's that they (a) marketed the wrong conclusion, and (b) put the conclusion to trial in a way that spent smart people's time unnecessarily.


Here was our initial conclusion from the CloudFlare post you referenced:

============

We think that stealing private keys on most NGINX servers is at least extremely hard and, likely, impossible. Even with Apache, which we think may be slightly more vulnerable and which we do not use at CloudFlare, we believe the likelihood of private SSL keys being revealed with the Heartbleed vulnerability is very low. That’s about the only good news of the last week.

We want others to test our results so we created the Heartbleed Challenge. Aristotle struggled with the problem of disproving the existence of something that doesn’t exist. You can’t prove the negative, so through experimental results we will never be absolutely sure there’s not a condition we haven’t tested. However, the more eyes we get on the problem, the more confident we will be that, in spite of a number of other ways the Heartbleed vulnerability was extremely bad, we may have gotten lucky and been spared the worst of the potential consequences.

That said, we’re proceeding assuming the worst. With respect to private keys held by CloudFlare, we patched the vulnerability before the public had knowledge of the vulnerability, making it unlikely that attackers were able to obtain private keys. Still, to be safe, as outlined at the beginning of this post, we are executing on a plan to reissue and revoke potentially affected certificates, including the cloudflare.com certificate.

Vulnerabilities like this one are challenging because people have imperfect information about the risks they pose. It is important that the community works together to identify the real risks and work towards a safer Internet. We’ll monitor the results on the Heartbleed Challenge and immediately publicize results that challenge any of the above.

============

Which is exactly what we did. To be clear, we were wrong. Our mistaken assumption was focusing on the private key itself and not focusing enough on the exponents that are used to generate the key -- which is what the researchers who solved the challenge were able to obtain. As the wishy-washy conclusion above hopefully makes clear, even when we said it was hard to get the private keys, we were very uncertain and uncomfortable with that conclusion. That's why we posted the challenge. What the challenge did was answer the question definitively: you can get private SSL keys. Knowing that has been valuable for us in deciding to accelerate reissue/revocation process for all the certs we manage on behalf of our customers. Remember: at our scale, revoking hundreds of thousands of certs risked breaking our CA partners' infrastructure, so it wasn't without a cost. Knowing the risk is higher than we originally thought accelerated our efforts which will be complete in the next 48 hours. And, beyond CloudFlare, our hope is knowing the risk is real and proven has benefited other organizations as well.


Look, I'm arguing this because I'm a nerd and because I know you're wrong, not because I think this is a moral crusade or anything. But, once again:

Your "Cloudflare Challenge" (that's what you called it) was not a particularly useful way to answer the question posed by Heartbleed. What you want to know is, "is there private key material in heap memory?", or, to make things even simpler, "are our assumptions about how key material hits heap memory accurate?". The correct way to answer this question is to instrument and analyze an OpenSSL/nginx runtime, not to create and market a treasure hunt for an undisclosed private key on a single site.

You employ smart people. You could have done better than this challenge. Instead, what seems to have happened is that your company got inserted into the middle of a story it had little to do with (correct me if I'm mistaken and unaware of something your team did to research Heartbleed), and, with that spotlight shining, actively marketed a harmful false conclusion about the bug while also bidding for the spare cycles of other people who might have been more effective doing something other than poking at your server. (For what it's worth, I don't think for a moment that you did either of those things intentionally).

I think if you had done either of those two things differently --- either not publicly gone out on a limb suggesting that you thought keys would be hard (or, as Bruce Schneier seems to have read from your blog post, "next to impossible") to recover, or not set up the game site while doing it --- I wouldn't be moved to comment.

Like I said, not after pelts. Just, if we're putting the Cloudflare response up for questioning, I have some issues to point out.


We spent 5+ days trying to get the private key. We, along with a lot of other smart folks, concluded it was unlikely. Within 11 hours of crowdsourcing the problem we were proven wrong. You may not have found that useful; for us, having definitive proof definitely was.


It's interesting you put it that way, because it hadn't occurred to me that your team had an early heads-up on the bug before writing that blog post. So what you're saying is that for 5+ days, the team was working on the assumption that the only time OpenSSL RSA private keys touched heap memory was when they were first loaded.

It seems like you can just look at the code and see how that's not the case. But I might be wrong, too.

It is a tricky thing, being in the center of a critical vulnerability disclosure story.


Since other people seem unaware, I'm going to rip open a personal emotional wound and explicitly tell people: 'tptacek knows exactly what he's talking about, because he did in the past almost exactly what Cloudflare did here.

(If you really want to know the issue, google his name with "dns", but it's not really relevant to rehash his mistakes here, except that his choice of words in this thread ("They didn't find the bug. They benefited from a heads-up on the bug. Then they promoted a mistaken assumption about the bug") is a screamingly obvious reference to his old mistakes.)

He's not being hypocritical; he's speaking from the voice of experience and having had things blow up in his face. He's trying to stop other people from making the same mistakes he did. When the guy with the big burn mark on his face talks to your chemistry class about the importance of lab safety, you should LISTEN.

"Experience is a dear teacher but fools will learn at no other." Sometimes your elders know what they are talking about.

(Now to re-bandage that wound. Another big piece of advice from another elder: people change and mature, yourself included. You should get over other people's mistakes.)


Yep, that happened.


An expert's hindsight is 20/20. (Mehta's team had the bug for 17+ days, and he still tweeted his reassurance on 4/8 with "#dontpanic".)

The challenge result instantly convinced a lot of people who still had doubts, because of the mixed messages elsewhere. The sideshow drama, much like the catchy name 'heartbleed' before it, worked perfectly for its intended purpose.


You're missing the obvious here: they have no idea what they're doing. Which is pretty evident from every aggrandizing post on their blog.


Could you shed some light on how this research was conducted? From reading the OpenSSL source and docs it seems pretty clear that the RSA struct will be on the heap somewhere.


Matt, my biggest problem with CloudFlare by a country mile is the ambulance chasing you guys do in your marketing and your penchant for inserting yourself in the story when you're not even involved. Here, you've done it quite obviously: you got predisclosure and you took the marketing opportunity at the expense of security on the public Internet. You were wrong. But you were the worst kind of wrong: you were wrong in a hurry to get your name in front of everybody first. If you had waited, you wouldn't have been wrong, and we would have been able to answer the question regarding keys without the disinformation.

It really pissed me off because I developed the ability to get keys long before you even wrote the post. I commented about it here several days before you wrote the post[1]. After your blog post, I was accused of fabricating the entire story because you said keys were unobtainable. I cannot, legally, release code without opening myself up to legal ramifications for reasons I won't get into here. Then, after your post, people I've known for a long time accused me of making the entire thing up for a "shot at glory." Meanwhile, I had to explain that your blog post was not definitive to multiple people who were reassured by the false security.

I brought your company's marketing strategy up with you before on HN. Remember when your company jumped on nytimes.com getting owned at the registrar[0]? And you wrote a hurried postmortem of events (I'm assuming, from the typos) without even consulting the affected vendor, then went so far as to speculate on what happened at the affected vendor, and made sure your "postmortem" got on top of HN first?

> An e-mail obtained by Matther [sic] Key, an independant [sic] journalist, indicates that the hackers used a MelbourneIT domain reseller account as part of the attack. While we are only speculating at this point, it's possible that there was a vulnerability in Melbourne IT's reseller systems that allowed a privilege escalation.

You replied here on HN with "no good deed goes unpunished" after I expressed my displeasure with your company's behavior in that scenario, and you didn't really address my points. You basically pointed to the CTO of NYT's praise of your company as evidence enough that you did the right thing. You took advantage of the marketing opportunity (which is fine, I don't fault you) at the expense of allowing the affected vendor to even draft a postmortem or contact customers in a timely fashion (I fault you for that).

We get it. You want to position CloudFlare as a superhero company, capable of fixing the Internet problems that the rest of us cannot handle. However, your marketing strategy has alienated me from ever using your services, and I am not alone in that opinion. Please, rethink that strategy. Focus on the product instead of fixing what you perceive to be a broken Internet that only you can fix. That's the obvious tone I get from your marketing and choices of venue.

[0]: http://blog.cloudflare.com/details-behind-todays-internet-ha...

[1]: https://news.ycombinator.com/item?id=7551915


I'd actually released code to get private keys a day or two before Cloudflare wrote that blog post on the principle that it was easy enough to do that numerous other people probably already had in private. Didn't help; people still kept concluding that it was impossible to get the private key. Cloudflare were the most publicity-hungry by a little but they weren't the only offender; ErrataSec screwed up pretty badly too for example, as did Akamai and I think one or two others.


The last time I saw a company leverage marketing in this way was when Silent Circle shut down their 4-month-old email product in a well-publicized 'stand of solidarity' with Lavabit. People didn't really care as much about that one.


Neel Mehta tweeted that private keys wouldn't be in the heap near packet data on April 8th:

https://twitter.com/neelmehta/status/453625474879471616


I think I'm fine giving the guy who found the bug a pass for his 140-character suggestion.

I'm not looking to collect pelts. I just think there was a better way to address the question of private key exposure than "The Cloudflare Challenge". In the future, maybe we can address serious questions with engineering instead of marketing stunts; how about the "let's all work to instrument OpenSSL" challenge?


My issue with the Cloudflare Challenge can be summed up in: no matter what the results are, it will give people a diminished impression of the bug's actual impact. I can't fathom any way in which the Cloudflare Challenge was beneficial to the security of their customers (or anyone else, for that matter), which should be the goal of bounties; not to simply be a PR move.


My problem is that if you're trying to figure out how key material is distributed throughout heap memory, asking people to answer that question about an unknown private key through heartbleed "peeks" is about the most obtuse possible way to find out.


Yeah, the whole thing left a vaguely unpleasant taste. It worked in this case, but now that the marketing genie is out of the bottle it's going to make security vulnerability intelligence harder to evaluate. IMHO.


No, he didn't. He said that it's unlikely, and he's right. It's unlikely that I'm going to get heads 20 times in a row if I'm flipping a coin, but it becomes likely if I try it a couple million times. Being unlikely doesn't mean you should assume it's impossible for a motivated attacker. Especially when the barrier to entry is so extraordinarily low, as we've seen repeatedly (Cloudflare and Akamai, I'm looking at you).


Except that it isn't actually at all unlikely with some common configurations, which actually seem to give up the private key on every single attempt.


Fair point -- you're right that it's extremely configuration-dependent. For many configurations (especially in the Apache realm) it's trivial and very likely; for others (especially Nginx) it's quite a bit more work. But even if he was 100% correct that it's unlikely, that still doesn't justify ignoring the threat, IMO.

I think we're in violent agreement here.


> Facebook say after the disclosure: "We added protections for Facebook’s implementation of OpenSSL before this issue was publicly disclosed, and we're continuing to monitor the situation closely."

I wonder if Facebook found and fixed the bug locally beforehand? Makes me think of all the OSS libraries we use without bothering to do much in the way of static analysis, etc., beyond simply making sure they compile.


How did Facebook get prior word but not Instagram?

I worry that, according to the timeline, Amazon wasn't given prior notice.


Amazon Web Services was explicitly listed in the group of companies at the bottom that did not get notification.


It was a day after disclosure that our ELBs were patched.


Yes, and I am worried that they did not get that notification.


How did Chromium not get prior word from Google?


Chromium should not have been vulnerable to the bug, since it does not use OpenSSL (last I looked, it used NSS).

Given that Adam Langley works with the Chrome folks, I suspect it did not end up needing a heads-up in the end :)


I suspect, given that the email gave no details, they had no knowledge of whether it was the same bug or not.


That seems like a Facebook internal issue. No reason why they can't be expected to share info with entities that they own.


How does it happen that, after being in the wild for two years, the bug was independently found by two different security researchers at roughly the same time? I'm not trying to suggest a conspiracy or anything, just genuinely curious how that works!


Perhaps they read the same article or attended the same talk, and something in the article or talk triggered a similar thought process.


The Apple "goto fail", which affected TLS, and the GnuTLS bugs were widely published just a couple of weeks earlier.



How does it happen that two people ask the same question in a comment at roughly the same time... ;-)


Something something NSA TAO something.


Clearly the need has presented itself for a tool or infrastructure that can safely transmit a security flaw to the necessary parties without the kind of arbitrary pecking order this situation has revealed. With torrents, those who exploit something can quickly share usernames and passwords with the world with little repercussion. Those who are trying to fix things should have the same effectiveness, speed, and low risk.


I wonder, instead of notifying a select few parties under an embargo, whether it would have been better handled by releasing the fix as encrypted source, with documentation giving the URL of a high-availability server that serves the decryption key only after a predefined point in time, along with documentation on integrity verification, assessment of the source changes, and the implications for other software using OpenSSL.


How would that not just be an overcomplicated way to do exactly what the reporters of this bug did anyways?


I think the idea is that everyone would get the info at the exact same instant. It also allows everyone to be "at their computer" ready to implement the fix.

It would mitigate the possibility of it leaking and getting exploited by someone.
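
A minimal sketch of the mechanics being proposed (names and buffer sizes are assumptions; OpenSSL's EVP API is used just for illustration): publish the ciphertext and a digest of the plaintext immediately, and publish the key only at the agreed time.

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <openssl/sha.h>

    int main(void) {
        /* Stand-in for the real patch bytes. */
        const unsigned char patch[] = "stand-in for the real patch bytes";
        unsigned char key[32], iv[16];
        RAND_bytes(key, sizeof key);
        RAND_bytes(iv, sizeof iv);

        /* Encrypt the patch with a throwaway key. */
        unsigned char ct[sizeof patch + 16];
        int n = 0, fin = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ct, &n, patch, sizeof patch);
        EVP_EncryptFinal_ex(ctx, ct + n, &fin);
        EVP_CIPHER_CTX_free(ctx);

        /* Published immediately: ct (n + fin bytes), iv, and the
           digest of the plaintext for integrity verification. */
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(patch, sizeof patch, digest);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");

        /* Withheld until the predefined time T, then published
           from the high-availability server: key. */
        return 0;
    }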


That's foolish, and doesn't take into account how software updates are actually rolled out in the real world.

Many vendors will not just simply compile a new version of a library from upstream source and just throw it on their machines. They depend on a tested release from their distribution maintainer, or something along those lines.

Also many vendors aren't prepared to do a simple upgrade: some may have customization they need to forward-port and test. Or perhaps they'd prefer to backport the fix to their older version.

So basically, your "everyone gets info at once" means that blackhats can get the information and exploit it almost immediately, while the good guys scramble to -- much more slowly -- patch their systems.


I wonder how one should proceed (if not working for any of these big tech companies) when one discovers a bug as critical as Heartbleed?


1. Keep a timeline.

2. Ask to speak to a representative of the people developing the thing on the telephone.

3. Between the two of you, figure out who to tell next.

4. Realize that other people may find the bug, too. You have some time, but not infinite time.


It's only fair to publicly disclose immediately. You can't possibly alert every trustworthy company on earth.

Now, if you want a bug bounty, you have to file a report and wait a certain amount of time before you are allowed to disclose.


You don't want to disclose it before releasing a patch.


I doubt the NSA knew about this bug, because if they did, they would have been putting the economy at huge risk.


How have all the other outed operations not put the economy at huge risk?


Because most companies don't care enough. A lot of companies are still hosting in the US.

But hackers taking over banking, for example, might be a real risk.


There are many hackers who do not reveal their exploits.

This is an overly optimistic account of who knew what when.

And how can we protect ourselves? Patch? Yes, that's necessary, but not sufficient. We rely on so many other protocols that are fragile and intercepted. Self-censorship sucks, and can't possibly protect everything either. What to do?


This is not claiming to be a complete list of who knew what when. It is only a list of confirmed cases.


Really? If there are caveats or disclaimers, they are very well hidden.

For most people reading this actual article, I think they will come away with the impression that it's a complete account.


Was the article changed since you read it? The first three paragraphs make it clear that this is an attempted reconstruction of events, and clarifications/corrections are requested.

>Ever since the "Heartbleed" flaw in encryption protocol OpenSSL was made public on April 7 in the US there have been various questions about who knew what and when.

>Fairfax Media has spoken to various people and groups involved and has compiled the below timeline.

>If you have further information or corrections - especially information about what occurred prior to March 21 at Google - please email the author: bgrubb@fairfaxmedia.com.au. Click here for his PGP key.


Sorry, that still just looks like glossing it over.

I mean, when the NSA and other actors do NOT submit their data to this guy, can it really be called complete?

It just stinks to me like it's complacency. Just change your passwords & patch, & then don't worry, share everything again, it's private. In 2014, I don't think we're safe & private anymore at all. I can take the downvotes. I don't like it either.


Heartbeats are not logged on a standard configuration, so if people other than the confirmed parties independently discovered the vulnerability, told nobody (or at least nobody who would tell anybody), and then exploited it on systems where heartbeats are not logged (which would be most of them), then how could anyone possibly know?



