
No, because CFAA isn't a strict liability crime and the prosecution is required to prove intent.


Changing from:

  https://payroll.example.com/?SSN=000-00-0000
to:

  https://payroll.example.com/?SSN=000-00-0001
shows intent to gain access that you were obviously not authorized for, even if you immediately report what you find.

Wasn't it ruled that accessing files you aren't authorized to view, even ones publicly accessible to everyone in a company on a shared drive, violates the CFAA because you are exceeding your authorization?


What if someone makes a typo entering their SSN in the form that leads to this page? Are they also a hacker? By your definition, they would be.

If your data is on the internet and not secured, it is constantly being scraped by robots for a start, some of which might iterate counters, so some responsibility lies with those who maintain the website. There isn't a clear line, like the threshold of a dwelling, that we can point to, because it's not always clear which URLs are authorised for a user and which are not. Ultimately you're not going to stop the curious, and the bots, from scraping the web, so if there are no access controls on your payroll you can expect data to leak, even if you come down hard on every single person you find accessing it without authorisation.

I think the emphasis here should be on intent, as shown by the data taken, and what was done with it, not on trying public urls. If someone shows intent to steal information by changing urls, then downloads the info, then uses it for identity theft or sells it on, that's clearly a crime, and unless they have mitigating circumstances, perhaps it deserves a fine or a very short jail sentence for serious cases. I do think the sentences today are excessive for this sort of activity.

If they simply access a URL as you propose above, I don't think you can show intent. Even if they access several urls, was their intent to explore, or to steal information, or did they just follow a bad set of links or make a mistake with their web crawler?


You've missed the point. Making a typo and accessing the page once with the wrong SSN, using the submit buttons on the web page provided, shows absolutely no intent.

Using an automated script bypassing the webform to cycle through as many as possible clearly shows intent to access something you're not supposed to.
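Concretely, the pattern looks something like this minimal Python sketch, reusing the invented payroll.example.com example from above:

  # Scripted enumeration, bypassing the site's own form entirely.
  # The domain and SSN parameter are the made-up examples above.
  import requests

  for n in range(100):
      ssn = f"000-00-{n:04d}"
      r = requests.get("https://payroll.example.com/", params={"SSN": ssn})
      if r.ok:
          print("record returned for", ssn)

Nobody stumbles into a loop like that by clicking around; you have to write it on purpose.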

People don't just access URLs at random; they will never type a URL with query strings into the browser.

They click links or submit forms.

Someone pen testing a website will be deliberately circumventing those methods.

It's like running wireshark on a public unsecured network, there's likely no good reason for you to be doing it and you know what you're doing if you're running that tool.

That's intent.

Note: I'm personally very pleased that they're fighting this. Just wanted to clarify what they mean by intent.


"It's like running wireshark on a public unsecured network, there's likely no good reason for you to be doing it and you know what you're doing if you're running that tool."

I for one don't really see what the issue with that is.

If you plug into an internet gloryhole whose infrastructure you don't control or trust, well, that's on you.


Running wireshark only shows packets that are delivered to your network interfaces. If people didn't want you to have that data, why did they route it directly into your computer's network port?


Using tcpdump after setting your wireless card into monitor mode will capture all packets going over the air nearby. So wireshark can easily be used to view the contents of traffic that was not routed to your machine.
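For example, a minimal sketch with Python and scapy (the interface name wlan0mon is hypothetical and the card must already be in monitor mode):

  # Passively print a summary of whatever 802.11 frames the radio hears;
  # none of this traffic was ever routed to the capturing host.
  from scapy.all import sniff

  sniff(iface="wlan0mon", prn=lambda pkt: print(pkt.summary()), count=100)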


People set up radios and broadcast data completely indiscriminately? I would argue that is like yelling a conversation and then being shocked that people might overhear you. (Also it only gets certain packets depending on what network you're joined to, what channel you're listening on, etc.)


I agree about the yelling part. But, to clarify, you don't have to join a network, and you can always scan channels. Although chances are that most people around (e.g., at a coffee shop or airport) are broadcasting on a specific channel.


I agree intent is important.

I disagree that visiting a link, or links, is enough to show intent; it's just not enough information, is too similar to normal web activity, and would mean the potential criminalisation of all sorts of innocent activity.


The prosecution is required to prove intent, beyond a reasonable doubt, to a jury. It's not something you get mechanically.


What kind of intent do they have to prove? That he intended to use this method to see if he could access the information, or that he intended to gather the information for other purposes?


They have to prove the criminal intent the CFAA requires. They have to prove that you knew you shouldn't have access to the data; that you in effect deliberately lied to the computer. They have to make their case on both mens rea and actus reus.

Not every criminal statute works that way (there are "strict liability crimes", like statutory rape), but most do.


Does it? Maybe it shows intent to see what kind of fancy 403 page payroll.example.com is employing?

I think one of the reasons people (including me) have problems with penalizing GET parameter change is that they are obviously visible and trivial to change. They are a part of the URL and pretty much designed to be modified by hand. Growing up on the Internet we learned that if we want to see e.g. the next page of the gallery or board, we don't have to look for the "next" button. We just change /0/ to /1/, or ?start=100 to ?start=150 in the address bar and press ENTER. It's easier. It's quicker. It's more natural. I can't feel that there's anything wrong with changing a GET parameter. It doesn't register on an emotional level.
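In Python terms, the habit amounts to something like this (gallery.example.com and the parameter name are invented):

  # Step the ?start= parameter by hand-sized increments, exactly as
  # you would in the address bar.
  import requests

  for start in (100, 150, 200):
      r = requests.get("https://gallery.example.com/board",
                       params={"start": start})
      print(start, r.status_code)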

My personal feeling is that on the Internet you're supposed to use HTTP codes like 403 or 404 to mark places a user is denied access to. IMO the space of legally accessible addresses for a given user should be defined as the set of all URLs that return 200 OK when that user tries to access them. If you screw up and serve sensitive data without a proper access check, it should be your (legal) responsibility, not that of the person who (accidentally or not) discovers it.
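A minimal sketch of that rule, using Flask (the route, session field, and in-memory store are all hypothetical):

  from flask import Flask, abort, session

  app = Flask(__name__)
  app.secret_key = "example"  # required for session support

  RECORDS = {"000-00-0001": "payroll data"}  # stand-in data store

  @app.route("/payroll/<ssn>")
  def payroll(ssn):
      # The server, not the visitor, draws the line: anything this user
      # may not see returns 403 rather than 200.
      if session.get("ssn") != ssn:
          abort(403)
      record = RECORDS.get(ssn)
      if record is None:
          abort(404)
      return record

Under this scheme, every 200 response is by definition an authorized one, which is exactly the bright line the law currently lacks.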


Just because you're used to it, and because you find it easy and natural doesn't mean it is or should be legal.

There are people who, upon seeing the goods laid out in a shop, will find it easy and natural to just pick them up and walk out. They're right there.


I'd analogise it to an open book in a public space. The information is there for you to see when you walk up to it, and you have to interact with it (turning the page) to see the other information. On being caught turning the pages of the book, you get yelled at and imprisoned, despite your contention that it's the book owner's fault for not preventing page-turning.


So what about looking at files in a hospital? Nothing wrong with that right? No-one can say anything bad about writing "Mr. Walsh had his testicles removed!" in the local paper. Right?!


This is the flaw with the analogy, as with all analogies that try to map meatspace to the internet. It would probably make sense to say that the book has been left in an apparently public space.


You'll get sued for disclosing the data to a newspaper, not for looking at them in the hospital.


Can we please refrain from trying to make analogies to stealing? Or really, any analogies to the physical world?


I was referring to the "wondering if blocking tracking cookies" question. I don't have a problem with criminalizing attempts to pull up random people by their SSN.


  > I don't have a problem with criminalizing attempts to pull up random people by their SSN.
If I see that the URL contains my SSN, and want to investigate if they were stupid enough to have this as a security hole, what are my options?

You seem to say that if I pull up the page of someone else, I am immediately a criminal and need to go to jail.

Should I instead report the possible issue to the company? Will they actually take, "I see that the SSN is in the URL and that it might be a security hole, but I don't know for sure because I haven't actually tried it," seriously? Hopefully they would, but I find that hope to be way more optimistic than the 'real world' should get credit for.


You don't get to break into a bank because you want to see if you can exploit a security hole.

What you're talking about is more akin to wiggling the bank's door handle and then leaving. What weev did is break in, steal a bunch of documents, and then talk about selling them.


"Breaking in" suggests that there is access control (locks, doors, walls, etc) in place.

AT&T admitted in court to publishing this data on the web. Emitting email addresses in response to ICCIDs was a specific feature they explicitly implemented to reduce the number of steps required to resubscribe to service, not a "security hole".

Your physical analogy is inappropriate, and serves to frame his actions as criminal when they are clearly not.

Please read the brief.


> "Breaking in" suggests that there is access control (locks, doors, walls, etc) in place.

No, it doesn't. See, this is what happens when you start talking about crimes on the internet when you really shouldn't be. If I leave all my doors and windows open, or if I put a box of valuables in the middle of an empty lot that I own, it doesn't suddenly make it legal for people to steal from me.

AT&T leaving their doors and windows open does not suddenly authorize any ol' grody troll to walk in and take personal information.

Whether you like it or not, his crime will be made into a physical analogy.


I _mostly_ agree with you.

How about _this_ analogy?

AT&T left a box of valuables in the middle of a lot they own, and weev walked by and grabbed them. Problem is, they weren't AT&T's valuables; they were mine and yours and 100,000 other people's, entrusted to AT&T.

Now who's "the bad guy"? Who's the more culpable "criminal"? Who would we be holding to account if it were a bank who'd piled up the cash from 100,000 people's savings accounts into a building with all its doors and windows open?

Sure, what weev did was wrong. I don't think it was the _only_ wrong done here, or possibly even the "worst" wrong.


I think that's fair. AT&T should be reprimanded for a serious lack of security — how much they should be reprimanded would be another topic for debate.

But it doesn't take away from weev's crime (both this one and his previous harassments.)


Sorry for your distorted reality. You're saying that everyone who accesses unsecured information on a badly secured server gets reprimanded. You're placing the onus of security on the user, which makes your point pure BS.


Granted. But how do you propose that AT&T would ever be reprimanded without someone like weev?

I strongly believe that weev should have notified AT&T before Gawker. But if they were unresponsive, as often happens, what then?


On the other hand, when AT&T leaves its doors and windows open 'in the web' they get a free pass from the general public because the technical aspect is lost on them.

If a bank used someone's first and last name as the 'access control' to their money, sure, someone breaking in and stealing things is wrong, but should the bank be punished for negligence? Probably. When companies have security breaches 'on a computer', why is this different? Why the free pass? Why is the person that 'broke in', or that pointed out the flaw without breaking in, the bad guy? Why aren't the companies themselves held to task for creating shoddy controls and not following best practices when it comes to computer security?

A better example to demonstrate what's going on to the public would be to have a web form that says "Enter your SSN#" and a submit button. People understand that. Changing the terms in the URL bar is voodoo to many people, and this unfortunately leads to the belief that someone exercised nefarious skills to pull off an attack.


It's more like if you wrote out your customers' personal data in a book nailed to a front door that opens onto a public street, and then tried to criminalise anyone who looked at pages that weren't relevant to them.


This is somewhat reasonable, since people need to actually come onto your property to access the book. It's probably unreasonable to say that someone was trespassing because they walked up to your door.


No they don't, the door opens onto a public street and the book is nailed to the front of it. This hypothetical book can be read while standing on the sidewalk. Sorry for not being more clear.


sneak said: "Breaking in" suggests that there is access control (locks, doors, walls, etc) in place.

ceol said: No, it doesn't.

I say: Yes it does. Your house has walls and probably a picket fence too. Either one is a boundary. The keyword here is "boundary" and not "locks". Having people's info waiting behind a serial number is not a "boundary" but rather a key-value pair accessible from the public domain. Your house is not accessible from the public domain because you have boundaries. Your servers are accessible from the public domain because you specifically have to put them online and make them accessible. Once you make servers accessible from the public domain then it is your responsibility to safeguard the privacy of what you put there. Weev did not DDoS the servers or inject SQL into their code. He accessed public info. Similarly if you put public info about you on facebook then it is not a security breach if I go there and check it out.


Physical analogies do not work regarding the internet. What happened is like he was given an address, he drove to it in a van, and a screen showed him his email. Then, he extrapolated that the buildings in the block he went to would do something similar, so he drove around to them in a car labeled 'VAN' and they showed emails.



First, you don't actually have the right to conduct security testing on third party servers. Nor should you. Real application security testing is disruptive, and in authorized tests, companies often take pains to ensure that testers aren't exposed to real user data.

Second, if you poke around to confirm the security or insecurity of an application, immediately report results to the target, and comply with requests for information, you may be civilly liable for damages (which it's unlikely anyone would pursue, given the PR implications) but are probably not violating the CFAA even as it's written today (that is: badly).


What is your plan to hold companies to task then? Especially when you can't confirm that your suspicions are correct?

Responsible disclosure? Say that you disclose your concerns to them. How does that play out?

1. They respond to you. They say that it is not a security flaw. You just have to trust them that there is no security flaw.

2. They respond to you, and tell you that they will not fix it. You tell them that if they won't then you'll disclose it to the public. They try to claim that you are extorting them.

3. They don't respond to you. You disclose it to the public. Turns out that it wasn't a security flaw. You are sued for defamation/libel/etc.

4. They don't respond to you. You disclose it to the public. The company has egg on their face and fixes it.

I think that the biggest flaw in the system is that companies are not held to task for their security flaws. I realize that if all software had to be perfect it would cripple the industry, but at the same time there has to be some notion of criminal (or at least civil) liability for negligence by people/companies that don't at least follow best practices.


I don't think random strangers on the Internet conducting unauthorized testing are really making much of a difference either way, so the prospect of changing how much of that goes on doesn't really factor in for me.


What if you are not a random stranger but someone whose information they hold and may be improperly securing?

The best way may be to get another user to allow you to try entering their social security number to see what happens. I don't see that the company could have any issue in that case.


If you're asking, do I think people who do independent unauthorized security testing of applications to protect their own information make a big difference in the real world, my guess is "no".


My response was poor because I was responding both to your "random internet user" post and to your earlier post about the right to do basic independent unauthorized security testing, without being clear about that. I think I objected to the characterization of the people with something to lose as "random internet users", which I inferred from your posts and you may not have stated.

Having said that I have certainly read of a number of cases where a difference has been made although it may not be a big difference to the overall world.

And while manually fiddling with a couple of URL parameters would seem to me a valid sanity check of the service you were using, I don't think that would give you the right to run nmap against their servers looking for vulnerabilities, to run automated fuzzing of the URL parameters, or to crawl the returned results.

This does not mean that I think the crimes with which weev was charged, or the sentence, are remotely appropriate. From what I have read he may deserve to be in jail (mostly for harassment, threats and blackmail), but that is what he should be charged with, not this AT&T case. Given that he eventually handed the data over to a journalist, I would give him a lighter sentence (if any, and if I were judge/jury) than I would give to AT&T (if this were the UK and I were the Information Commissioner). I don't know of any data protection requirements in the US (for non-health data), so AT&T may not actually have been criminal, but they certainly were negligent.


Then what's the point of all the bug bounty programs out there?


First, that's not unauthorized testing. Bug bounty programs attract better, more talented testers, because they're compensated and (just as importantly) because they take much of the risk out of testing 3rd party services (a company that offers a bug bounty will have a hard time freaking out about bugs when they're reported).

Second, the companies that offer bug bounties tend to be ones that often spend well into 7-8 figures on security already.


If you can access your own record without authentication shouldn't that be good enough proof? Why do you need to access everyone else's information illegally to prove that your own is available without proper controls in place?


Could we not turn that objection around and say that implementations which allow exposure of random people's payroll data via typo-ed/bit-rotted/guessed SSNs should be criminalized?

In my head, this is related to the "expectation of privacy in public" and "ubiquitous surveillance" arguments. If somebody makes available on the public internet a system that reveals my payroll data "secured" only by an enumerable/guessable SSN, then while an attacker exploiting that vulnerability is "in the wrong", so too, in my opinion, is the developer/management/company who deployed that system.

It seems to me that AT&T should be held to account for their actions at least as much as weev is. If the data weev acquired was worth prosecuting over, then AT&T needs to be considered culpable/negligent for its exposure.


I have no problem with attaching penalties, perhaps even criminal ones, to negligence on the part of people deploying apps, too. I don't see why it has to be one or the other.

Incidentally: I have literally no opinion about the Auernheimer case, so don't read anything into these comments.


This could be the basis for a class action(?) lawsuit against AT&T for gross negligence in handling sensitive data.


This is absurd. If there are no access controls, but access is still a crime, we're going to have to determine whether we can legally access each web address beforehand. But that's what 403 status codes are for.


The same thing happens in real life all the time. We're supposed to use our brains and make ethical decisions on our own, not simply rely completely on technical safeguards to clue us into proper behavior.


What he did was unethical. But the idea that connecting to a URL you changed on a hunch could ever be a felony is outrageous.

Imagine if it had been comics published online. Because the server refuses to interact with your Chrome browser, you tell the website you're running IE. Because the "next" button is small, you use keyboard shortcuts to change the URL and view the next one. Without realizing it, you view a handful that weren't released yet. That's now a felony with a court precedent.
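For what it's worth, that "lie" is a single header the client has always been free to set (the comic site URL is invented):

  # Telling the server you're IE; the User-Agent is just a string the
  # client chooses, much like the iPad UA in the AT&T case.
  import requests

  headers = {"User-Agent": "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2)"}
  r = requests.get("https://comics.example.com/strip/42", headers=headers)
  print(r.status_code)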


Why does it have to be binary with all you guys? :P

I didn't say weev deserved a felony conviction. I said he dun goofed, as a counterpoint to what many here are saying, that because the API he accessed was unauthenticated, it meant he did nothing wrong. That argument's completely bogus as well, just as much as a 2 year prison sentence for this is bogus.


I don't think what he did is ethical and I would be happy to see him jailed for an actual crime.

But talking to a webserver isn't like entering a house. It's like making a phone call. "Hi.. my name is Firef--, I mean, Mobile Safari. Can I have your email?"

I think creating a precedent for prosecution when accessing a number of web pages after spoofing a header is far, far worse than making an example of a troll that exploited a loophole to grab information that he shouldn't have. When talking to a webserver, without a clear separation between public and private with something like an API key or username/password, the only possible convictions we should allow is over DoS and that is only if there is malicious intent.


> the only possible convictions we should allow is over DoS and that is only if there is malicious intent.

What's 'malicious intent'? Is it what the 'reasonable person' decides it is? If so I don't see how what you're proposing is significantly different from what I've been saying.

Likewise a DoS is not the worst possible thing you could do to a website with an unauthenticated API. Why do you carve out an exception for DoS but not for e.g. identity theft or doxxing?


Why do you think it's reasonable to make people take a guess (even an educated one) at the intent of the site operator?


The law is full of "reasonable person" tests like this.


Unfortunately, "reasonable person" varies with the times. In Nazi Germany, a "reasonable person" would have understood that the reason they lost WW1 was due to the Jews. /godwin

[I also take issue with usage of the term 'common sense' because it is so nebulous.]


If you want to re-litigate all of English Common Law, I've got no objection, but not much to contribute.


"Reasonable person" varying with the times is actually kind of the point though.

Laws exist to inform the actions of people, not computers. I think that's lost on people of our expertise sometimes when we start to seriously envision a world where there is no ambiguity whatsoever for a given action.

But we've already seen a world like that: It's called 'zero tolerance' just as we see at schools in the U.S. and it's been, on the whole, a disaster.

Anything other than zero tolerance or full tolerance leaves room for interpretation, no matter how much you try to pin it down. At least with 'reasonable person' tests we know that ahead of time.


While 'zero tolerance' is a disaster, I really don't like the law being too open-ended, because then I can never be certain how my actions will be interpreted in light of the law.


You're right, but unless you're both a lawyer and a genius then the cold hard facts are that you really can't ever be certain how your actions will be interpreted in light of the entire law.

I used to think this was an issue with the law, that we need to take out loopholes and corner cases. But in the process of specifying allowable and unallowable behavior you make the law so expansive that it can never be grokked.

By making the law simple, you make it fuzzy and now we're back into your problem.

I would blame the lawyers and legislators, but honestly I have extremely simple programs that I can't actually predict the behavior of, and the computer does exactly what I tell it to.

I don't say this to say that we shouldn't fix the law, only that I think at some point you (the royal you) have to come to grips with "c'est la vie" and just not worry as much. Either way you can't completely win, so why fret over what you can't control?


The problem is that criminals are free to harvest data thanks to insecure programming, while white-hat hackers are banned from discovering these vulnerabilities (hopefully) before they are exploited.


There is not in fact an arms race going on between unauthorized white hat hackers and criminals.


You are explicitly authorized by the specified policy that uses SSN to look up the data. If you aren't authorized, why is it sending you the data?

If you make no effort to authenticate requests, I find it very unreasonable to act like any requests are unauthorized.


So what about denial of service attacks going against just the public unauthenticated API?

Just because AT&T does a boneheaded security implementation for which they deserve sanction, does not entitle weev or anyone else to go beyond ethical boundaries in discovering (and in weev's case, abusing) that security lapse.


I think DoS is covered by clauses other than just authorized or unauthorized. You can't legally DoS people even if you are an authorized user.

> that security lapse.

I don't think you can call this a lapse. It's not like they had passwords but forgot to change them. They designed it without any security.


> They designed it without any security.

Is that the only criterion now? You'll only do the ethical thing if someone else remembered to bake in technical safeguards?


To some extent, I disagree. It can set a pretty high de facto bar especially for the independent researcher. Sure, eventually -- if the world and the court you land in are a fair and decent place -- you may be absolved. But you may spend a lot of time and money getting to that point.

Facing what appears to be frequently very if not overly aggressive government prosecution, and/or private prosecution (so far, civil -- although see e.g. privately driven criminal prosecutions in the U.K.) by very well funded, perhaps overwhelmingly funded legal teams motivated by parties who as often as not seem to want to bury any and all bad publicity while discouraging any efforts that might -- even when justifiably -- dig it up...

I guess I view the top level Internet IP address space as a public space. If you can't put onto and manage your resources on it in a responsible and secure fashion, you deserve what you get.

Going from memory, as I understand it, there was no "subverting an authentication system", here. He merely iterated a public parameter. Granted, he apparently stepped through a lot of iterations, but a script can do that, even inadvertently, as IIRC was alleged to have occurred in this instance.

Ultimately, he didn't sell or otherwise misuse the resulting data. My personal inclination would be to argue that, at a minimum, benefit of the doubt should militate against a felony-level conviction.

Also personally, my own dealings with AT&T have left me with absolutely no sympathy for them. The SBC culture from which their senior management devolves I have found to be atrocious.

They should spend less time looking for scapegoats to fry and wave in front of the next person to find one of their shortcomings, and instead "man up" and fix their own processes and systems.

Finally, in many such cases, it does seem to be the individual who is finding these problems and therefore causing them to be fixed. As the holder of online accounts and data, I don't want to abandon that field to some combination of lackadaisical corporate process and un-prosecutable malicious entities in Eastern Europe, China, or wherever.

I know you're the expert in this field, and I don't mean to disrespect your work nor your commitment to excellence. Nonetheless, my own not-insignificant experience has repeatedly taught me how, absent independent pressure, entities often don't get around to fixing such problems and can actually create strong de facto internal disincentives to doing so. I've seen this myself, repeatedly, at many organizations, including very large and successful multi-national firms.

I've seen it from the inside, where I've had to take damage and career risks in order to get things addressed. Even as a well-meaning employee of an organization with such a problem, I worry what "doing the right thing" may cost me.

We are increasingly forced to rely on them -- banking records, medical records, etc., etc. The onus should be on them -- to get this right. If nothing else, I can argue that economically it is they who can afford the risk (that is, the responsibility and cost of pro-actively mitigating it). And that should be a factor that is considered when determining where the balance in the law rests.

One argument that I've seen made, is that European credit and debit cards have chip and pin because the European banks bore a greater risk for and cost of fraud. Economic incentives can be an important factor in creating and maintaining security. Criminal risks can be, as well, but perhaps in a different fashion than we are talking about for this case. For both the economic and the criminal liability, the weight needs to rest more heavily on the parties blatantly leaving personal data vulnerable.

Such entities seem to demand ever more of the resources society has at hand -- financial, legal, etc. They should bear the responsibility, along with this. If one guy and his laptop can catch them out, and particularly if he's not doing evil with the results, well, then, shame on them. Stop focusing so much on "the hacker".



