"Speaker presentations and materials are put online generally two weeks after the event. Audio and video are generally available 6-9 months after the conference." [1]
The paper mentions standard methods of attack, such as glitching voltage and/or clock.
Does anyone want to comment on how feasible it would be to defend against stuff like that in both hardware and software?
E.g. in software, instead of just storing an address in memory, store a tuple. Something like (address, ~address). Validate each tuple on use: XORing the two halves must yield all bits set. That's obviously a naive thing, but there are probably similar relatively low overhead things that can be done.
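A minimal sketch of that tuple idea in C (the `guarded_ptr`/`guard`/`unguard` names are mine, not from any real codebase):

```c
#include <stdint.h>

/* Store a value together with its bitwise complement; a single-bit
 * glitch in either half makes the pair inconsistent. */
typedef struct {
    uintptr_t value;
    uintptr_t check;   /* must always equal ~value */
} guarded_ptr;

static guarded_ptr guard(uintptr_t value) {
    guarded_ptr g = { value, ~value };
    return g;
}

/* Returns 1 and writes the value out if the pair is consistent;
 * returns 0 on a detected glitch. XORing a word with its complement
 * must produce all bits set. Real code would fail closed here. */
static int unguard(guarded_ptr g, uintptr_t *out) {
    if ((g.value ^ g.check) != (uintptr_t)-1)
        return 0;
    *out = g.value;
    return 1;
}
```

As the comments below point out, a glitched comparison can skip the check itself, so this only raises the bar rather than closing the hole.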
Same with hardware. It wouldn't be too difficult to store a parity bit to accompany each register byte. Any hardware glitching that flipped register bits would tend to result in parity errors. Parity checks are not very secure when considered individually, but collectively it would be very difficult to glitch the hardware without introducing massive numbers of parity errors.
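For concreteness, the per-byte parity a hardware lane would compute can be sketched in C as an XOR fold (a software illustration only; real parity would be a few gates beside the register):

```c
#include <stdint.h>

/* Even-parity bit for one byte: XOR-fold the eight bits down to one.
 * Any single-bit flip in the byte changes the result. */
static uint8_t parity(uint8_t b) {
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1u;
}
```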
In software it is very difficult to "protect" against. The goal with glitching is simply to switch a 1 to a 0 somewhere (conditional branch, memory store, etc.). No matter how many extra if statements and bits you set, you're not going to get all of them. And the attacker can just glitch twice.
Remember, you're not causing the memory to go funky, you're messing with the processor's reads from registers and cache. So if the next instruction says "jump not equal 0" and the attacker wants "jump not equal 1", parity can't really help you in that scenario.
If you add parity, then the attacker just needs to glitch that parity check out.
Hardware defenses are the best. In particular it looks like Apple is using a PLL. It has been a while since I worked with all this stuff, but I believe the PLL makes clock glitching impractical.
For voltage glitching, I'm sure they have components that monitor voltage and either smooth it out, or just shut the chip down if it sees something weird.
If you need to cause glitches in a few precisely chosen moments in time, while _not_ causing glitches in some other moments in time between the former, it gets a lot harder. It gets even harder if the response to a wrong sequence of glitches is to brick the processor / erase storage.
There are also known, reasonably good defenses against PC (program counter) glitches that involve checking, at the end of every basic block, that a subset of that block's instructions has executed.
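A toy sketch of that basic-block check in C (the `step`/`block_end` names and the multiply-accumulate signature are mine, standing in for what a compiler pass would insert):

```c
#include <stdint.h>

/* Each "instruction" in a basic block folds a token into a running
 * signature; the end of the block verifies the accumulated value.
 * A glitch that skips a step leaves the signature wrong. */
static uint32_t sig;

static void step(uint32_t token) {
    sig = sig * 31u + token;
}

/* Returns 1 if every step in the block ran; real code would fail
 * closed (halt or erase) instead of returning. */
static int block_end(uint32_t expected) {
    int ok = (sig == expected);
    sig = 0;
    return ok;
}
```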
I wonder if any exploits have successfully used voltage or frequency glitching on a modern processor with DVFS (Dynamic Voltage and Frequency Scaling).
Am I understanding correctly that while they enumerate a list of potentially useful attack vectors, there are no actual attacks (yet)?
Of course, since the Year of Snowden, I now assume that any "theoretical" attack vector has a Team, a Project Manager, and a half-completed Kanban board somewhere deep in the NSA…
If I'm reading the last few slides correctly, the tl;dr is something like:
- They haven't found any actual attacks.
- Because the secure enclave runs so little code, there's very little attack surface in software, and much of what there is (mainly message passing between the secure and normal environments) seems solid. The only likely possibility is the wrapper code around IMG4.
- However, if there is an exploit in IMG4, there aren't many mitigation techniques (stack canaries, ASLR) built in, so it would be likely to succeed (again, conditional on there actually being an exploit)
- Attacking hardware might be possible, but mainly on older devices, because the >= A8 chips have extensive protection against side-channel and power analysis attacks.
- One "game over" possibility would be blocking the "fuse signal" that tells the CPU that the secure enclave has been compromised. This would allow for replay attacks. However, this would require extremely capable hardware for both analyzing the chip lines and actually performing the attack. If it's possible at all, it would definitely be restricted to NSA-like scenarios.
They do conclude that the iPhone security features seem "light-years ahead of competitors" (their words), and coming from a Blackhat presentation, that actually means something.
Not just a Black Hat presentation, but from Azimuth, who are among the coolest of the cool kids. This is Mark Dowd and John McDonald's company. The people giving this talk are... not the "B" team.
The military has been building these kinds of secure embedded processors for a long time, they usually include physical / environmental protection packaging.
I wonder where this chip would fit among the FIPS 140-2 levels?
This seems like a lot of code to be running in a security-critical, relatively simple device. Does anyone else feel that this device should be much, much simpler?
Of course that might raise development costs but that seems like a fair trade off in this case, especially if it causes some "features" not to be implemented because they would be too hard.
What "lot of code"? It's running L4, which is perhaps the simplest operating system that still deserves the name.
A lot of the complexity they're documenting is in hardware: the AES-XEX memory encryption scheme that protects the SEP's memory from the AP (or any other component of the system), the fuse array that controls its settings, the memory filter that restricts AP reads/writes to the mailbox range.
Still more of the complexity is in the AP and in the AP's interfaces and drivers. That's real complexity, but it's outside the SEP's TCB.
I guess I would have expected it to not have an operating system. I understand that an operating system can provide security through isolation, but when you are running small amounts of highly security-critical code, it seems like the isolation benefit would be outweighed by the extra attack surface.
Can you be more specific about the extra attack surface to which you're referring? What vantage point does an attacker need to have to target that attack surface?
Well, most data is going through more layers of code, and as a rough approximation, the more code running, the more vulnerabilities. I guess this "internal" code isn't as critical a surface because you have to get through the applications first, but there is certainly still risk.
I'm still not sure I follow. Can you outline a hypothetical in which there is a practical risk, so I know what you're talking about? Obviously, neither of us has all the technical details, so just propose something.
I didn't have any attack in particular, but an example could be sending a long buffer to the kernel that causes an integer overflow and overwrites some important memory. Especially with the non-verified external RAM it seems like you could throw some weird stuff at the kernel.
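To make that class of bug concrete, here's a hedged C sketch of a length check on a hypothetical mailbox message (the `HDR`/`BUF_SZ` names and values are invented for illustration; nothing here is Apple's actual code). The broken variant wraps when `len` is near `UINT32_MAX`, so a huge length passes the bounds check:

```c
#include <stdint.h>

#define HDR    8u     /* hypothetical header size */
#define BUF_SZ 256u   /* hypothetical mailbox buffer size */

/* Broken check: len + HDR can wrap around to a small number,
 * so an attacker-supplied huge len slips past the bound. */
static int len_ok_broken(uint32_t len) {
    return len + HDR <= BUF_SZ;
}

/* Safe check: rearranged so no arithmetic can wrap. */
static int len_ok_safe(uint32_t len) {
    return len <= BUF_SZ - HDR;
}
```

The fix is the usual one: compare against the bound with subtraction on the constant side, so no attacker-controlled arithmetic can overflow.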
PDF viewers, browsers, web servers, office products, and mobiles are very common targets for exploit development.
However, most vulnerabilities in these applications are simply too valuable to announce to the world. They are generally purchased by offensive cyber operations companies or the government before they could be explained at e.g. Blackhat.
This is one reason it's crucial to have bug bounties. Apple can easily pay more than say, Endgame. If they bought exploits in the same fashion and at the same price point as the defense industry, their security posture would improve tremendously.
The comment surely meant a scenario where the presentation file itself, claiming to tell us about something else, happens to be a PDF exploit that attacks the audience of readers, not that the presentation is about such a vulnerability.
How feasible would it be to bombard the enclave with radiation, low enough to avoid any physical damage to the silicon, but high enough to cause random glitches in computation.
To flip bits, you'd generally want to use radiation with a high local energy deposition such as alpha particles or heavy ions. With typical photon radiation (X-Rays or gammas from radioactive decay) you'll mostly just degrade the silicon until it stops working reliably overall, and not change data.
As the secure enclave's surface is much smaller than the application processor's, you'd also have to focus your protons/alphas/heavy ions very well, because otherwise you'd affect the logic outside the secure element much more often.
I think this approach in practice is much, much harder than to just shine any random radioactive source on it.
If anyone is interested in a less technical architecture overview of the Secure Enclave, I've written a blog series aimed at a more casual audience that is still concerned about mobile security. The blog can be found here: https://woumn.wordpress.com/2016/05/02/security-principles-i...
The best part of working in a digital forensics research lab is that I get to read this kind of presentation without having to say "there goes my productivity for the day". Actually, whenever one of these comes around is when I get to be most productive :)
> Speaker presentations and materials are put online generally two weeks after the event. Audio and video are generally available 6-9 months after the conference.
Okay I'll leave something behind as well. This is my favorite sec-conference video of all time: [HOPE X] Elevator Hacking: From the Pit to the Penthouse https://www.youtube.com/watch?v=rOzrJjdZDRQ Closely followed by DEF CON 18 - Joseph McCray - You Spent All That Money and You Still Got Owned... https://www.youtube.com/watch?v=_SsUeWYoO1Y