What is your plan to hold companies to account, then? Especially when you can't confirm that your suspicions are correct?
Responsible disclosure? Say that you disclose your concerns to them. How does that play out?
1. They respond to you. They say that it is not a security flaw. You just have to trust them that there is no security flaw.
2. They respond to you, and tell you that they will not fix it. You tell them that if they won't then you'll disclose it to the public. They try to claim that you are extorting them.
3. They don't respond to you. You disclose it to the public. Turns out that it wasn't a security flaw. You are sued for defamation/libel/etc.
4. They don't respond to you. You disclose it to the public. The company has egg on their face and fixes it.
I think that the biggest flaw in the system is that companies are not held to account for their security flaws. I realize that if all software had to be perfect it would cripple the industry, but at the same time there has to be some notion of criminal (or at least civil) negligence for people/companies that don't at least follow best practices.
I don't think random strangers on the Internet conducting unauthorized testing are really making much of a difference either way, so the prospect of changing how much of that goes on doesn't really factor in for me.
What if you are not a random stranger but someone whose information they hold and may be improperly securing?
The best way may be to get another user's permission to try entering their social security number and see what happens. I don't see how the company could object in that case.
If you're asking, do I think people who do independent unauthorized security testing of applications to protect their own information make a big difference in the real world, my guess is "no".
My response was poorly worded because I was responding both to your "random internet user" comment and to your earlier post about the right to do basic independent unauthorized security testing, without making that clear. I think I objected to characterizing the people with something to lose as "random internet users" — a characterization I inferred from your posts but which you may not have actually stated.
Having said that, I have certainly read of a number of cases where a difference has been made, although it may not be a big difference to the world overall.
And while manually fiddling with a couple of URL parameters would seem to me a valid sanity check of the service you were using, I don't think that would give you the right to run nmap against their servers looking for vulnerabilities, run automated fuzzing of the URL parameters, or crawl the returned results.
This does not mean that I think the crimes with which Weev was charged, or the sentence, were remotely appropriate. From what I have read he may deserve to be in jail (mostly for harassment, threats, and blackmail), but that is what he should have been charged with, not this AT&T case. Given that he eventually handed the data over to a journalist, I would give him a lighter sentence (if any, were I judge/jury) than I would give to AT&T (were this in the UK and I the Information Commissioner). I don't know of any data protection requirements in the US (for non-health data), so AT&T's conduct may not actually have been criminal, but it certainly was negligent.
First, that's not unauthorized testing. Bug bounty programs attract better, more talented testers, because those testers are compensated and (just as importantly) because the programs take much of the risk out of testing third-party services (a company that offers a bug bounty will have a hard time freaking out about bugs when they're reported).
Second, the companies that offer bug bounties tend to be ones that often spend well into 7-8 figures on security already.