I think we can all agree that keeping a vulnerability secret for a reasonable amount of time while the company fixes it (unless there is active evidence of exploitation, or you have significant reason to believe there is) is strictly more ethical than disclosing it publicly right away.
Here are a bunch of tptacek comments on the topic:
--
"Responsible disclosure" is a term of art coined by a group of vendors and consultants with close ties to vendors. Baked into the term is the assumption that vendors have some kind of proprietary claim on research done by unrelated third parties --- that, having done work of their own volition and at their own expense, vuln researchers have an obligation to share it with vendors.
Many researchers do share and coordinate, as a courtesy to the whole community. But the idea that they're obliged to is a little disquieting.
If vendors want to ensure that they get some control over the release schedules on their flaws, they can do what Google does and pay a shitload of money to build internal teams that can outcompete commercial research teams. Large companies that haven't come close to doing that shouldn't get to throw terms like "responsible disclosure" around too freely.
--
"Responsible disclosure" is a marketing term. Linus may be wrong about the importance of security flaws relative to bugs, but that doesn't validate the self-aggrandizing omerta of security researchers.
Vendor "coordination" of security flaws often works to the detriment of users. For one thing, cliques like vendor-sec gossip and share findings with the "cool kids", ensuring that every interested party but the operators knows what's coming a week before the advisories are published. For another, it substitutes the judgement of people like you --- who, no offense, don't run real world systems or make real world risk assessments about real assets --- for the judgement of the people who are not like you, but who could potentially disable or work around vulnerable systems far in advance of "coordinated patches".
--
The process of "responsible disclosure" gives product managers latitude, because it effectively dictates that researchers can't publish until the vendor releases a fix. The vendors almost always decide when to release fixes.
When a researcher publishes immediately, vendors are forced to fix problems immediately. A small window of vulnerability is created ("small" relative to the half-life of the vulnerability, which depends on all sorts of other things) where less-skilled attackers can exploit the problem against more hosts.
On the other hand, in the "responsible" scenario, many months will invariably pass before fixes to known problems are released. During that longer window, anybody else who finds the same problem (and, obviously, anyone who had it beforehand) can exploit the vulnerability as well.
Furthermore, full disclosure creates a norm in which vendors are forced to allocate more resources to fixing security problems, instead of waiting half a year or more. This costs vendors. But the alternative may cost everyone else more. It depends on how well-armed you think organized crime is.
Finally, there's the issue nobody ever seems willing to point out. If you disclose immediately, lots of people can protect themselves immediately: by uninstalling or disabling the affected software.
--
You are unlikely to find anyone in the "community of responsible security researchers" to say anything negative about Tavis Ormandy. It is way over the top to imply that he's not "halfways mature".
You will, on the other hand, find plenty of people with real reputations in the industry at stake (unlike yours, which is influenced not one whit by anything you say about disclosure) who will be happy to explain why "responsible disclosure" is damaging the industry. It's not even a hard argument to make. The dollar value of a reliable Windows remote is too high to pretend that bona fide researchers are the only people who will find one. Meanwhile, because product managers at large vendors are given the latitude to fix problems on the business' schedule instead of the Internet's, people get to wait 6-18 months for fixes to trivial problems.
Personally, without wading into "responsible" vs. "full" disclosure, I will point out that vulnerability research has made your systems more secure; the manner in which the vulnerabilities were uncovered has very little to do with it. You are more secure now because vendors and customers pay to have software tested before and after shipping it.
--end--
This exchange on why the term "responsible disclosure" is Orwellian is also worth reading: https://news.ycombinator.com/item?id=12308246
All of that is just a small portion of everything he's said: https://hn.algolia.com/?query=by:tptacek%20responsible%20dis...
I think there are sufficiently many persuasive arguments that it's very difficult to claim that someone who informs the world of the existence of a bug is doing more harm than good, regardless of how that information is made public. And if they're doing more good than harm, it's probably ethical.
If you want to control disclosure of a bug, why not offer large sums of money in exchange for an NDA? "Report it to us, keep quiet for 30 days, and we'll pay you 100k." That would be a motivator few would turn down.
That's not a question easily answered, though. Assuming complete objectivity, it would mean that you might save an order of magnitude more lives in the future by discovering a cure for cancer, for example, at the expense of lives today. You're then stuck with questions of how to quantify the value of a life, but the underlying fact that you DO save more lives, and therefore keep more families intact and happy, doesn't change. Who matters more: the people who died in the process, or the future ones who will die as a result of inaction in the present?
That's not right. Of course, in practice that situation doesn't come up, because you can't reliably tell whether you'd save many more people by sacrificing people now.