
In a system that tries to be secure against server attacks, nobody cares about the server-side code, because we can't trust that it's the same exact code (i.e. without backdoors) anyway. Therefore, it would be possible to make assertions about how secure iMessage is with only the client code.


To some extent the same problem applies to application code. As long as the hardware and platform are closed, we don't really know what is going on when the application is executed. Despite this, opening the application code would of course be a step in the right direction. Not because it would guarantee that Apple is not doing bad things, but because it would help others spot problems that may be there by accident.


Yes, I agree with all of that.


Even if the server code is good, it's frequently hosted on servers that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what they're running, whether it's patched properly, and whether it's up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, and an expert in securely deploying secure communications systems, and then host it yourself.

For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.


> Even if the server code is good, it's frequently hosted on servers that have other vulnerabilities anyway. Because we're not the sysop, we have no idea what they're running, whether it's patched properly, and whether it's up to date. The only way to be safe is to become an expert in digital security, an expert in systems management, an expert in reading and mitigating security problems in open source code, and an expert in securely deploying secure communications systems, and then host it yourself.

If the server is assumed to be compromised by the threat model, then none of this matters.
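The idea can be shown with a deliberately simplified sketch: if messages are encrypted end to end, the server only ever relays ciphertext, so compromising it reveals nothing about the plaintext. (This is a toy illustration using a hash-derived XOR keystream, not real cryptography; all names here are hypothetical, not anything from iMessage.)

```python
import hashlib

def keystream(shared_key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a toy keystream from a shared key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            shared_key + nonce + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(shared_key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(shared_key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR is its own inverse

# Alice and Bob share a key; the server never sees it.
key, nonce = b"shared-secret", b"msg-001"
ciphertext = encrypt(key, nonce, b"meet at noon")

# A compromised server only learns the opaque ciphertext...
assert ciphertext != b"meet at noon"

# ...while Bob, holding the key, recovers the message.
assert decrypt(key, nonce, ciphertext) == b"meet at noon"
```

Under that threat model the interesting code is all on the client, which is why auditing the client would be enough.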

> For the rest of us, even those of us reasonably well versed in systems and application security, we kind of have to put our trust in someone and hope they don't fuck it up.

It is better to have to put your trust in any number of independent auditors than to have to put your trust in a single corporation.


Still, the other problems the parent mentioned apply. Can you verify the build against the source? Can you verify the binary on your phone against the build?
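Verifying a binary against a build ultimately comes down to comparing digests, assuming the vendor publishes one and the build is reproducible. A minimal sketch (the file name and contents are stand-ins, not a real release artifact):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded release binary.
with open("app.bin", "wb") as f:
    f.write(b"example binary contents")

# In practice the expected digest would come from the vendor's signed
# release notes, or from rebuilding the same source reproducibly.
expected = hashlib.sha256(b"example binary contents").hexdigest()

if sha256_of("app.bin") == expected:
    print("MATCH: binary corresponds to the published build")
else:
    print("MISMATCH: binary differs from the published build")
```

On iOS neither step is available to the user, since you can neither rebuild the app nor read the installed binary off the device, which is exactly the objection.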


No, of course not. I also can't build an iPhone from scratch or look at the hardware design or firmware. It's especially bad when you consider the Qualcomm baseband and the problems it has.

Hell, I can't even trust that with my desktop computer. Further, how do I know the light in front of me isn't fabricated and that I'm not a brain floating in fluids connected to a simulation?

Realistically, I believe Apple's intentions are in the right place. And I believe that, for the most part, iPhone backdoors are not a thing yet. Being able to look at the client code is not something I believe will happen, but I believe it would be good if it did, because then the security of iMessage could be independently verified much more easily. It is true that Apple could just lie about it and publish different code, but assuming their intentions are in the right place, it seems like a win-win for everyone.


You can't build an iPhone from scratch, but you can build one from parts. Of course, it would need the magic signature from Apple in order to make Touch ID work unless the home button and system board were bought together.



