
Or you can find and replace the CA file/string with your own and then do a MITM. There is virtually no way app developers can defend against such hacks on any platform owned by the user (so, everywhere except on Apple devices).
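To make the attack concrete, here's a minimal sketch (in Python, with hypothetical names) of the kind of pinning check being targeted: the app ships a hardcoded fingerprint and compares the presented certificate against it. Find/replace that pin (or the CA it's derived from) and the MITM certificate passes.

```python
import hashlib

# Hypothetical pin the app ships with; in a real app this would be the
# SHA-256 fingerprint of the server's DER-encoded certificate.
PINNED_SHA256 = hashlib.sha256(b"server-cert-der-bytes").hexdigest()

def cert_matches_pin(cert_der: bytes, pinned: str = PINNED_SHA256) -> bool:
    """Compare the presented certificate's fingerprint to the embedded pin."""
    return hashlib.sha256(cert_der).hexdigest() == pinned

# The attack described above: patch PINNED_SHA256 (or the CA file it is
# derived from) to the fingerprint of the attacker's own MITM certificate.
```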


Well, it turns into a reverse engineering arms race at that point. There's no way they can guarantee anything, but they can throw resources at obfuscating things more.

If you're the reverse engineer and I'm the app author, you find/replace my CA file. Then I respond (or anticipate!) by checksumming the file to detect tampering. Then you respond by find/replace on the checksum.
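The checksum countermeasure might look like this sketch (names hypothetical; in a real build EXPECTED_DIGEST would be a hardcoded constant recorded at build time, not computed at runtime as it is here for illustration):

```python
import hashlib

CA_PEM = b"-----BEGIN CERTIFICATE-----\n...snip...\n-----END CERTIFICATE-----\n"

# Recorded at build time in a real app. An attacker who find/replaces
# CA_PEM must now also find/replace this digest -- the next move in the
# arms race.
EXPECTED_DIGEST = hashlib.sha256(CA_PEM).hexdigest()

def ca_is_untampered() -> bool:
    """Detect a naive find/replace on the embedded CA."""
    return hashlib.sha256(CA_PEM).hexdigest() == EXPECTED_DIGEST
```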

Then I obfuscate the checksum string. Then you respond by faking out the platform's checksum API so that it always returns true. Then I respond by computing a checksum that I know should fail and verifying that the checksum API isn't just always returning true. Then you respond by faking out the platform checksum API with a whitelist of blobs whose checksum it should lie about.
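The "is the checksum API lying to me" step can be a known-answer self-test; this sketch uses the published SHA-256 digest of the empty string:

```python
import hashlib

# Known-answer test: SHA-256 of the empty string, a published constant.
SHA256_EMPTY = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def hash_api_looks_honest() -> bool:
    """Feed the hash API an input whose digest we know in advance.

    A hooked implementation that blindly reports 'match' (or returns a
    fixed digest for any input) fails this check -- unless, as noted
    above, the hook carries a whitelist of blobs to lie about.
    """
    return hashlib.sha256(b"").hexdigest() == SHA256_EMPTY
```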

Then I respond by statically linking my own checksum verification code into my binary instead of calling the platform's. Then you respond by patching my binary to jump around the code. Then I respond by using code obfuscation techniques.
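The statically-linked-checksum move could be as simple as a hand-rolled FNV-1a, shown here as a Python sketch; since there's no shared platform API to hook, the attacker has to locate and patch this code inside the binary itself:

```python
def fnv1a32(data: bytes) -> int:
    """Tiny hand-rolled FNV-1a checksum, inlined into the app instead of
    calling a platform API. Nothing to hook -- the attacker must find and
    patch this routine in the binary."""
    h = 0x811C9DC5                          # FNV-1a 32-bit offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF   # FNV prime, wrapped to 32 bits
    return h
```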

And on and on. Given enough time and resources, any implementation I create can be subverted. But if I'm a huge tech company, I can afford a lot of time and resources too, if I want to. I can't eliminate it, but maybe I can make it something that rarely happens.


> But if I'm a huge tech company, I can afford a lot of time and resources too, if I want to. I can't eliminate it, but maybe I can make it something that rarely happens

The whole cracking scene can afford far more time and resources than any one tech company. All adding protections does is make a more valuable target, because crackers love a good challenge.

"There's always a crack in everything. It's how the light gets in."


Not really. A skilled reverse engineer doesn't find/replace any certificates, or anything like that. They just debug the app, stepping through all the code, instruction by instruction. There's no way you can beat that. I'm a malware reverse engineer - I work with this kind of stuff every day. And I'm pretty sure there's no 'game': once the binary is in my disassembler, it's over.


> But if I'm a huge tech company, I can afford a lot of time and resources too,

Can? Sure! Would?

Security is one of those things that most people say they care about, but they are really not willing to pay for. Big companies have resources, but they are also in the business of making money, so they will put most of those resources to work on features that produce a ROI. Security is a huge cost center, and even when taken seriously it will be pursued only to the degree that it addresses/mitigates risks enough to conduct business.


It's more annoying than you'd think if your adversary anticipates this and designs around it. Fragments of the CA can be scattered throughout the codebase, effectively forcing you to pull everything apart; with security canaries and embedded interpreted scripts in the mix, even when you're getting close you never know whether you've tripped an alarm, or whether you're still seeing the same thing as everyone else.


Would something like this work as a method to authenticate authentic clients?

http://www.cnn.com/TECH/computing/9908/20/aolbug.idg/index.h...


Paraphrasing the article, the idea was for the server to exploit undefined behavior in the _authentic_ clients to determine that they were in fact authentic. In this case, a buffer overflow doesn't crash the client, but its effects let the server know that it's talking to a legitimate client. That's quite clever.
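A toy simulation of the idea (not the actual AOL protocol, and all names hypothetical): the server's challenge makes the client read past the end of a buffer. The genuine client "leaks" the adjacent bytes of its own code; a clean-room clone has nothing there and answers differently.

```python
# Hypothetical bytes that happen to sit after the buffer in the genuine client.
CLIENT_CODE = b"\x55\x89\xe5\x83\xec\x10"

def genuine_client(challenge_len: int) -> bytes:
    buf = b"PONG"
    # Simulated overflow: reading past buf spills into adjacent "code" bytes.
    memory = buf + CLIENT_CODE
    return memory[:challenge_len]

def cloned_client(challenge_len: int) -> bytes:
    # A reimplementation has nothing interesting past its buffer.
    return (b"PONG" + b"\x00" * 64)[:challenge_len]

def server_accepts(reply: bytes) -> bool:
    # The server knows what the genuine client's out-of-bounds read yields.
    return reply == b"PONG" + CLIENT_CODE[:4]

# server_accepts(genuine_client(8))  -> True
# server_accepts(cloned_client(8))   -> False
```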


On jailbroken devices, subverting such an effort would be just as trivial.



