Apple Platform Security (Jan 2026) [pdf] (help.apple.com)
208 points by pieterr 1 day ago | 177 comments




They made C memory safe? This is a big thing to gloss over in a single paragraph. Does anyone have extra details on this?

> On devices with iOS 14 and iPadOS 14 or later, Apple modified the C compiler toolchain used to build the iBoot bootloader to improve its security. The modified toolchain implements code designed to prevent memory- and type-safety issues that are typically encountered in C programs. For example, it helps prevent most vulnerabilities in the following classes:

> • Buffer overflows, by ensuring that all pointers carry bounds information that’s verified when accessing memory

> • Heap exploitation, by separating heap data from its metadata and accurately detecting error conditions such as double free errors

> • Type confusion, by ensuring that all pointers carry runtime type information that’s verified during pointer cast operations

> • Type confusion caused by use after free errors, by segregating all dynamic memory allocations by static type


>They made C memory safe?

They made a dialect of C with bounds safety, see:

https://clang.llvm.org/docs/BoundsSafety.html#overview
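
For a flavor of what that dialect looks like, here's a minimal sketch based on the linked document. It assumes the experimental -fbounds-safety flag; on Apple's toolchain the __counted_by annotation macro comes from <ptrcheck.h>, and the function name here is made up:

  #include <stddef.h>
  #include <ptrcheck.h>  /* provides the __counted_by annotation macro */

  /* The annotation ties buf's bounds to count; the compiler then emits
     a runtime trap on any access past buf[count - 1]. */
  void fill_with_zeros(int *__counted_by(count) buf, size_t count) {
      for (size_t i = 0; i <= count; ++i)  /* deliberate off-by-one */
          buf[i] = 0;                      /* traps when i == count */
  }

As I understand it, unannotated local pointers get implicit bounds-carrying representations, so most existing C code recompiles largely unchanged; that's what makes it a dialect rather than a new language.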


Many years ago. It’s called Firebloom. I think it’s similar in theory and lineage to Fil-C.

https://saaramar.github.io/iBoot_firebloom/
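
The linked analysis describes iBoot pointers being lowered to a structure that carries bounds and type information. A rough conceptual model in plain C (field names are illustrative, not Apple's actual layout):

  #include <stdio.h>
  #include <stdlib.h>

  /* Each pointer becomes a "fat" descriptor instead of a raw address. */
  typedef struct {
      char *ptr;          /* current address */
      char *lower;        /* first valid byte of the allocation */
      char *upper;        /* one past the last valid byte */
      const void *type;   /* runtime type info, checked on casts */
  } fat_ptr;

  /* Every dereference is lowered to a check like this before the load. */
  char checked_load(fat_ptr p, size_t i) {
      if (p.ptr + i < p.lower || p.ptr + i >= p.upper) {
          fprintf(stderr, "out-of-bounds access\n");
          abort();  /* trap instead of silently corrupting memory */
      }
      return p.ptr[i];
  }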


Sort of. From my understanding they've been heavily using clang with -fbounds-safety to insert bounds checks into functions. I think there was work done to retrofit the checks into existing code as well. The memory tagging in newer processors helps avoid overflow exploitation. Maybe someone can jump in and add more details.

Yes, that is however a dialect, and one of the goals of the Swift Embedded roadmap is to replace it.

So they were not joking when they said they want Swift to replace everything from assembly to JavaScript.

I don't think this will end well.


It has been on Swift and Apple's official documentation since the early days.

People keep forgetting that Objective-C also had a full stack role on NeXTSTEP.

And the same full stack approach was also a thing on Xerox PARC systems, which mostly failed due to mismanagement.

It usually ends well for closed-source platform vendors, since developers aren't allowed to come up with alternatives the way they are on FOSS operating systems.

At least, as long as the platform stays market relevant.


>People keep forgetting that Objective-C also had a full stack role on NeXTSTEP.

In terms of apps and the low-level stack, Objective-C doesn't seem wrong in my book. The problem is that Swift began as a much larger language and has evolved into a gigantic pile of a little of everything.


Doesn't seem to hinder C++, which modern C compilers are written in nowadays.

Despite all its complexity, LLVM and GCC aren't getting rewritten any time soon, nor are the OSes that would rather use C++ subsets than be stuck with C.


Apple's commitment to privacy and security is really cool to see. It's also an amazing strategic play that they are uniquely in the position to take advantage of. Google and Meta can't commit to privacy because they need to show you ads, whereas Apple feels more like a hardware company to me.

modeless linked to this article earlier today:

https://james.darpinian.com/blog/apple-imessage-encryption/

My current understanding of the facts:

1. Google defaults to encrypted backups of messages, as well as e2e encryption of messages.

2. Apple defaults only to e2ee of messages, leaving a massive backdoor.

3. Closing that backdoor is possible for the consumer by enabling ADP (Advanced Data Protection) on your device. However, this makes almost no difference, since 99.9% of the people you communicate with will not close the backdoor. Thus, the only way to live is to assume that all the messages you send via iMessage will always be accessible to Apple, no matter what you do.

It's not like overall I think Google is better for privacy than Apple, but this choice by Apple is really at odds with their supposed emphasis on privacy.


Enabling ADP breaks all kinds of things in Apple’s ecosystem subtly with incredibly arcane errors.

I was unable to use Apple Fitness+ on my TV due to it telling me my Watch couldn’t pair with the TV.

The problem went away when turning off ADP.

Turning off ADP required opening a support case with Apple, which took three weeks to resolve; before that, any attempt to turn it off would just fail with no detailed error.

Other things like iCloud on the web were disabled with ADP on.

I just wanted encrypted backups, that was it.


That chimes roughly with my experience, but to be fair, ADP is designed not just for encrypted backups but to harden the ecosystem for people who may be under the greatest threat. Worth noting that it has been outlawed in the UK and cannot be enabled there, which makes me think it's pretty decent.

> Worth noting that it has been outlawed in the UK and cannot be enabled

For the record, there is an ongoing court battle between Apple and the UK government about getting it overturned.

Which also says many positive things for Apple that they are willing to put their money where their mouth is and put up a fight.


And that’s a significant PR and marketing posture for Apple.

Apple's other emphasis is customer experience, and there are more "I forgot my code, help me recover my stuff" people than you can imagine.

It would be bad PR for Apple if everybody constantly kept losing their messages because they had no way to get back into their account.


You think there are fewer people who forget their credentials using Google devices? I doubt it. The article talks about how Google prevents that from happening.

That’s all fine, but then show the sender whether their connection is actually end to end encrypted, or whether all their messages end up in Apple’s effective control.

One might consider differently colored chat message bubbles… :)


ADP isn’t the default, and almost nobody who isn’t a journalist/activist/potential target turns it on, because of the serious (potentially destructive) consequences.

How does Google manage this, such that every normie on earth isn't freaking out?


> because of the serious (potentially destructive) consequences

Huh? What are you talking about? I don’t see anything destructive about it.


People don't always have enough Apple devices to justify confidence that they couldn't lose them all at the same time, which with ADP is a permanent death sentence if you don't have your recovery key.

(Apple says you can also use a device passcode; I'm not sure if this works if the device is lost. Maybe it does?)


I have 2 or 3 yubikeys associated with my account. I think apple does a decent job at communicating the importance of having recovery keys to the point where they deter those who can’t be bothered.

Yubikeys are great


Nobody expects their text messages to be backed up.

They get deleted and people shrug.


Or IOW, Google's solution affects only messages. Apple's solution affects your whole digital life, so the consequences are a lot more dire.

> Apple’s solution affects your whole digital life

I don't know if that's generally true. I could lose my Apple account and not really give a damn. Not that I see how such a thing would happen, save for Apple burning down all their datacenters. I'm running ADP.


Google’s solution also ensures that they know all the metadata of your messages, except the content of the message itself.

Apple too collects unencrypted metadata but now promises to reduce its scope.

How convenient... indeed.

I keep my messages and would like them to not go away.

Why?

Can someone explain what the real difference is to a consumer user between an iPhone and a Pixel or a Samsung device? Across all services, push notifications, and device backups.

Both promise security, Apple promises some degree of privacy. Google stores your encryption keys, and so does Apple unless you opt in for ADP.

Is it similar to Facebook Messenger (encrypted in transit and at rest but Meta can read it) and Telegram (keys owned by Telegram unless you start a private chat)?

There are things Pixels do that iPhones don't, e.g., you get notified when a local cell tower picks up your IMEI. I mean, it's meaningless since they all do it, but you can also enable a higher level of security to avoid 2G. Not sure it's meaningful, but it's nice to have.


Some of these companies don't make money from you, the end user, but by selling ads and data to more effectively deliver said ads.

Differences in capabilities, experience and implementation are all downstream from that. In other words, everyone pays lip service to privacy and security, but it's very difficult to believe that parties like Meta or Google are actually being honest with you. The incentives just aren't there.

With Apple, you get to fork over your wallet, but at least you seem to be primarily the user they've got to provide services to.

With Google/Meta, you're a sucker to bleed dry.


I think there's also a topology chasm at play. Apple controls most of its hardware stack; it still buys Qualcomm modems and Samsung displays, but the SoC is now Apple's own. Google relies on rotating third parties to assemble the Pixels, hence poor QC. Samsung makes its own Exynos modems, which it doesn't dog-food, relying on Qualcomm instead like Apple does, while Google still depends on Exynos.

Then there’s a big disparity across all Android hardware vendors. Google must cater to that more or less federated topology of Android devices. It’s much harder.

Yet I don’t see any technical blocker for an opt-in for an Apple-grade ADP in Pixels and Galaxies.

It’s all quite weird. Even with Google Passwords, how do I know that it’s E2EE if I can unlock it from a browser with just a device PIN? Lots of loopholes.


Addendum: this just in. Apple has much more to lose if they pull something like this; for Meta, news like this... barely registers? At least I'm not surprised at all.

https://www.theguardian.com/technology/2026/jan/31/us-author...


Apple, Samsung and Google all earn money from ads on your phone, just with different monetization pathways.

My understanding though is that the monetization pathways for Samsung and Google are 3rd party—Apple keeps your data to itself.

Apple sends your searches to Google for money. I would call search queries data?

I wonder how exactly Apple Intelligence works with ChatGPT and soon with Gemini. If I remember correctly, there’s no privacy there? If so, where’s the privacy boundary in Apple Intelligence?

Google pushes Gemini everywhere and wants to hold on to your interactions, with human reviews. While I applaud the transparency, having Gemini scrape my screen makes me uneasy. My frog's not warm enough for that, yet.

And Gemini in Sheets and Docs is just a toy. Microsoft 365 Copilot is a step ahead but is wrong more often than not, at least from my interactions with them. Both very disappointing. No way to justify access to my personal or my company’s or clients’ information.

Apple promises something they call Secure Compute or so (I don't remember the exact name), which appears to be encrypted and randomized off-device cloud compute. With the iPhone being the most powerful to date (per Geekbench), Tensor Pixels will have to offload most of the edge compute to GCP, and Snapdragon Samsungs, while powerful (I have no idea but would assume so), must follow the Pixel Android approach.

So AI features will exfiltrate even more personal information, occasionally, accidentally, or purposefully, and the user would have consented to that and the human reviews just to get access to the smart features.


> Apple sends your searches to Google for money. I would call search queries data?

Yawn. Changing your default search engine takes 5 seconds.


That’s true, though I wish Apple gave me the freedom to define a new search engine, beyond the small provided selection.

How many people do you think actually do it?

I think AdSense is still an Alphabet subsidiary?

> Apple promises some degree of privacy.

Apple also makes it easier to achieve that privacy:

    - They put all the privacy controls in one place in Settings so you can audit
    - App developers are mandated to publish what they collect when publishing apps to the App Store.

> - They put all the privacy controls in one place in Settings so you can audit

That's true. On Pixel Android, there are several unrelated places across the device settings and the Google account settings to take care of, and you have to check that they don't collide. For every function there's always some small print like "it's all private to you unless you choose to share," but to use any of the features/services you have to "share," as with Google Photos and Calendar and Tasks, so you lose track of what you share with whom in the end. Essentially, not only the metadata is collected but also the content, and nothing's private as a result, at least that's what I came to understand. And even if you ask Google to delete your personal information, it will retain it for a while for compliance purposes.

As for

> - App developers are mandated to publish what they collect when publishing apps to the App Store.

I believe that's still moot, more a voluntary disclosure that no one vets. I've seen apps with no collection stated on the App Store but deviating privacy policies, or app functions that contradicted their own privacy policy.

From what I heard and read, I understood that as a well-meant idea but still a misconception on the consumer part due to lack of enforcement by Apple.


> From what I heard and read, I understood that as a well-meant idea but still a misconception on the consumer part due to lack of enforcement by Apple.

I'm not familiar with the details so I cannot comment directly on what you are saying. I don't have the time to go read up on it right now.

But what I would say is that many aspects will be indirectly enforced by Apple (and can be audited/enforced by the user) through the privacy controls (location services, microphone, camera etc.). Clearly that does not cover everything, but it covers a large chunk.

Apple have also made it impossible to, for example, get a device-level ID; you can only get an app-level pseudo-device ID. So there are various code-level enforcements too.


Is it different with Google Play?

> Can someone explain what the real difference is to a consumer user between an iPhone and a Pixel or a Samsung device? Across all services, push notifications, and device backups.

By default, Apple offers you at no charge: email aliases, Private Relay, the Ask No Track barrier. These are just the ones I can think of right now; I am sure there are more. A big thing with Apple is not that they offer different privacy services but that they make them EASY and SEAMLESS to use. No other company comes close.


Aren't those part of iCloud+ only? Ask No Track can arguably compromise your privacy via fingerprinting.

I agree that the privacy controls on Apple systems are well-organized.

Still, it's more important to have confidence that the privacy services are not smoke and mirrors with carefully carved-out loopholes. It's one thing to provide something and hold the competitor up as the litmus test; it's another to sustainably live up to your promises, like the now-pejorative "don't be evil" slogan, with retroactive ramifications. There's really little users can effectively validate about Apple's privacy promises.


I still like to encourage people to watch all of https://www.youtube.com/watch?v=BLGFriOKz6U&t=1993s for the details (from Apple’s head of Security Engineering and Architecture) about how iCloud is protected by HSMs, rate limits, etc. but especially the timelinked section. :)

I still recommend Mr. Fart's Favorite Colors as a refutation, describing why all of these precautions cannot protect you in a real-world security model: https://medium.com/@blakeross/mr-fart-s-favorite-colors-3177...

  Unbreakable phones are coming. We’ll have to decide who controls the cockpit: The captain? Or the cabin?

Krstić: “Here’s how we reduce the chance that even Apple can access or alter X, and here’s how we can make that credible.”

Ross: “Even if you make X cryptographically airtight, the real fight becomes political/physical coercion: ‘ship this or else.’”

Those can both be true at the same time.


I don't understand.

That article (written in 2016) says that Apple will build unbreakable phones in the future. Now is the future. So it seems to imply that Apple phones today are unbreakable.

Also, where does the article discuss "all of these protections"? (HSMs, rate limits, etc.)


> So it seems to imply that Apple phones today are unbreakable.

Indeed. If you don't control the "unbreakable" security though, then the lock is not for your benefit.

> where does the article discuss "all of these protections"?

You could read the danged article, it's pretty clear about the vulnerability of proprietary mitigations. I hate quoting spoilers verbatim but here you go:

  The sharper you get, the more important the work. But the more valuable the work, the craftier — and more determined — your adversaries. Every attack is more novel than the last. [...] By the time you land an engineering gig at Apple, you are a twitchy, tinfoily mess.

  And it is in this spirit that you develop one of the most secure systems the world has ever known. [...] So adversaries be damned: You finally win on the merits. But who said anything about meritocracy? During the champagne toast, Mr. Fart steps from behind the curtain and pulls the pistol of last resort:

  “Don’t ship this. Or else.”

It's all tempered by them ultimately controlling what you can put on your phone though.

As was demonstrated in LA, it's starting to have significant civil rights consequences.


What happened in LA?


I forgot about that and hadn't tied it to LA specifically in my head. Thanks for reminding me, really shitty thing that made me a lot more sympathetic to alternative app stores where I'd been against them before.

Security is pointless if the platform allows 90% of users to be socially engineered into running code that disables that security.

What's funny is you could read that statement as being an argument for or against walled gardens, depending on what kind of social engineering is being referred to.

The ability for people to do stupid things is the inescapable price of freedom. That does not make freedom not worth it.

Apple is an ad company now though


Their net profit was a little over $100 billion last fiscal year. They get $20 billion+ in pure profit from Google being their default search engine.

That's 20% of their profit.


Google paying Apple to be the default search engine is not the same as Apple selling $20 billion worth of ads to track you.

Google isn't just paying Apple a flat $20 billion; it's based on click-throughs on ads in Safari. Apple is very much getting paid based on the ad economy.

But it still isn't Apple doing the tracking or receiving the data about your Google searches. They aren't Apple's ads, they're Google's ads.

How does that matter? Apple is still seeing 20% of its profits from ads and Google is still tracking you through Apple’s browser and Apple is getting paid for it.

> How does that matter?

Keeping in mind the context of the overall thread we're in, where the OP said this:

> Apple's commitment to privacy and security is really cool to see. It's also an amazing strategic play that they are uniquely in the position to take advantage of. Google and Meta can't commit to privacy because they need to show you ads, whereas Apple feels more like a hardware company to me.

And then further down somebody replies with this:

> Apple is an ad company now though

The implication was that, because Apple sells ads now, they must be tracking all of your personal data in the same way that Google does. And then that train of thought was further continued with the implication that, because Apple receives "20% of its profits from ads and Google" (lumping them both together), Apple ergo is receiving 20% of its profits through tracking all of your personal data. But it's not Apple tracking all of your personal data, it's Google tracking it, and they would track it whether they're the default search engine on iOS or not.

The distinction matters to me, and it's why I buy Apple products but not Google products.


They pay to be the default, not the only possible search provider.

They pay per click.

Again, they get paid a cut of Google's ad revenue from Safari users. This has one impact on Apple's design choices - Google remains the default search engine.

Notably, this hasn't stopped Apple from introducing multiple anti-tracking technologies into Safari that prevent Google from collecting information from Safari users.

If I open up a new tab in safari it tells me that in the last 30 days Safari prevented 109 trackers from profiling me and that 55% of the sites I use implement trackers. It also tells me that the most blocked tracker is googletagmanager.com across 78 websites


Yes it is.

Is this what you consider discourse? At least justify your position, don't shit out some drive-by popular opinion that I can't even begin to respond to.

The point is that Apple will make money any way that it can, including ads. That's why iOS privacy is worse than its competitors. You can't install an app without telling Apple because if you could, Apple wouldn't be able to monetize you as well. You can't get your location without also telling Apple because if you could, Apple wouldn't be able to build its location services as easily. No such problems on Android.

Apple sells some ads yes. But it’s a tiny fraction of their revenue.

Would Google or Meta go bankrupt if they stopped selling ads? Yes. Apple wouldn’t.


As long as you don’t count the $25 billion that Apple gets from Google.

I was wondering why Apple bought an identity clone patent that would wreck targeted ads and never used it. Maybe it’s a $25 billion insurance policy.

Are you suggesting that is what is keeping Apple afloat?

No. But guess how much their stock would drop if one quarter they lost 20% of their profit?

Apple would go bankrupt without US protectionist policy propping up their service revenue.

That's pretty bad. Maybe not "reliant on ad monopoly" bad, but pretty close.


That seems like a stretch. Even in Europe where people can choose to use different app stores, few people actually do. So few, in fact, that one of the alternative app stores recently shut down.

Have you considered that people just like Apple's products and services?


In their revenue report this week, out of $140B, services made up $30B. $140B - $30B = $110B. That's pretty far from bankruptcy.

And that was just one quarter…

Run 12 quarters, for all I care. Service revenue accounts for more than 50% of Apple's YoY revenue growth: https://www.statista.com/chart/14629/apple-services-revenue/

Hardware sales aren't picking up the slack, and advertisement revenue is also following a growth trend. Apple's stock would indeed be cooked if they went balls-out against the government that guarantees them access to cheap hardware and software that has been declared illegally anti competitive by foreign sovereigns. Apple needs this.


Elaborate? Financial results say otherwise.

What does whether they’d go bankrupt or not have to do with whether they’re an ad company?

They sell third party ads: companies unaffiliated with Apple pay Apple to advertise on Apple platforms.

They’re an ad company. Just because it’s currently a small slice of their total revenue doesn’t make it untrue.


What matters is that the parent comment said “Apple is an ad company now,” as if that negated all the privacy and security stuff they do.

Making some cash on ads doesn’t have to rely on targeted tracking. That only matters if ads are an existential part of your business, and without huge ad revenue growth, your company is dead.


I guess it’s also a financial company, since they have a branded credit card?

I mean if you don’t care about details that’s fine I guess. Let’s call any company that sells and/or buys any amount of ads an "ad company". Let’s put them all into one bucket and judge. That’s super valuable.

[flagged]


All while slowly stuffing (more?) ads into their software.

In a lot of ways Apple is aligned to data privacy the same way other "platforms" are: gatekeeping the user data behind their own ad service. It's better than selling your data, maybe, but you're still being tracked and monitored.


The worst part is that since Apple is technically not a 3rd party, many of the rules don't apply to them, even though they bring the same harm to the users. Did you notice the new "creative suite" has analytics, with identities linked to your Apple account, turned on by default? Free Pages/Numbers is not so free anymore.

You can't sell cell phones and "not care about security". There are these things called government regulators that won't let you sell them anymore if security issues happen.

> Apple gives zero fucks about security.

Hyperbole doesn’t help your point. They definitely care about security, their profits depend on it.


That people fall for this corporate BS while Tim Cook is giving gold bars to Trump and dining and dancing with him, when people are being murdered on the streets by ICE, is just amazing to me.

Well that’s what Americans voted for. So I don’t think anyone cares that every CEO (definitely not just Tim Cook) is schmoozing with Trump.

> Well that’s what Americans voted for.

Americans are not one person.

> So I don’t think anyone cares

Clearly they do.

> every CEO (definitely not just Tim Cook) is schmoozing with Trump.

Tim Cook was (supposedly) principled. I guess it's hard to pretend that you care about privacy or human rights while eating dinner next to bin Salman.


> Tim Cook was (supposedly) principled. I guess it's hard to pretend that you care about privacy or human rights while eating dinner next to bin Salman.

I guess if you thought he had principles then yeah that could be disappointing. Personally I've never tried to moralize corporations though, I just assume the only principle that every company and CEO operates by is whatever increases the stock price.


Funny that you think that people have free will in this zombie, social-media-mind-controlled Internet world we're living in.

Besides, Trump's approval ratings are worse than ever, so I don't think people really got what they wanted; they got who they voted for, not what they voted for.


The Twinkie defense is alive and well, I see.

Yeah, Americans voted for Trump. But that shouldn't prevent CEOs from showing a spine. Tim Cook is no different from all the others, therefore Apple doesn't deserve any less contempt from us.

You know what's even cooler? Apple's commitment to hiding US federally-mandated backdoors for dragnet surveillance: https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

  Apple has since confirmed in a statement provided to Ars that the US federal government “prohibited” the company “from sharing any information,” but now that Wyden has outed the feds, Apple has updated its transparency reporting and will “detail these kinds of requests” in a separate section on push notifications in its next report.

Apple has ads. See the App Store; Apple Maps is also planning to roll out advertising.

Giving all your private data to Apple is not "privacy and security", https://hackernews.hn/item?id=39927657

I still like their hardware. But let’s not pretend that there is any part of Trump’s body that he won’t kiss and sell out his customers for. If Trump asked Cook to put a backdoor in iPhones or impose tariffs on Apple, Cook would do it in a minute

My Mother Night hope is that Cook publicly shows obsequiousness only so that in private he can hold the line on backdoors, etc.

I know, I'm living in a fantasy world in my head.


Cook couldn't personally put that backdoor in himself though. There would (presumably) be Apple employees who would blow the whistle if they received such a command.

In today's market? Have you seen how many tech workers have shut up about protesting every little thing inside large companies, with all of the layoffs happening?

I have been cocky for 30 years with the thought that I could always find a job quickly, and I have, even in 2023 (3 offers within 2 weeks) after being Amazoned, and in 2024 (just replied to one recruiter the day after a layoff). But even I shut up and keep my head down these days. As long as we ain't killing kids, I am not saying anything.


That's a good point. I'm lucky enough to be self employed, and I tell myself that if I were ever in the position to blow the whistle at a corporate or government job I'd do it in a heartbeat. But what we tell ourselves in our head and what we do when our family's livelihood is on the line aren't always the same.

I call BS on this whole Apple privacy thing; it's nothing but propaganda.

Two years ago I was locked out of my MacBook Pro.

Then I just booted into some recovery mode and just... reset the password!?

Sure, macOS logged me out of (most) apps and websites, but every single file was there unencrypted!

I swear, the people who keep boasting about this whole Apple privacy thing have absolutely no clue what they are talking about; nothing short of tech-illiterate charlatans. But God, the propaganda works.

And don't get me started on iMessage.


You chose not to enable FileVault during setup. Probably because you were worried about being locked out and wanted an easy way to reset the password.

Would you prefer that Apple did not give you the option to disable the security feature you disabled during setup?


Ain't nobody paying attention, in any case it's still propaganda.

> Apple's commitment to privacy

We know now that it was all marketing talk. Apple didn't like Meta, so they spun up a bunch of obstacles. Apple has used and would use your data for ads, models, and anything that keeps the shareholders happy. And we don't know the half of the story where, as a US corp, they're technically obliged to share data from the not-E2EE iCloud syncs of every iPhone.


> Apple has and would use your data for ads, models and anything that keeps the shareholders happy.

Illegal to do this in (at least) the EU, California and China.


It sucks that Apple decided to monetize the iPhone the way they have, by controlling the owner's ability to install software of their choosing. Ignoring the arguments one could make about this making it "more secure", it's clearly disrespectful to the power user who doesn't want to beg Apple's permission to use their computer. I'll grant that their security claims are sound, but it's hard to take them seriously regarding privacy arguments.

Our choices are either (A) an OS monetized by tracking user interaction and activity, or (B) one monetized by owning the basic act of installing software on the device. Both of these options suck, and I struggle to give up the more open option for one that might be more secure.


Ignoring the arguments one could make about this making it "more secure", it's clearly disrespectful to the power user who doesn't want to beg Apple's permission to use their computer. I'll grant that their security claims are sound,

I wouldn't say they are sound. First, macOS provides the freedom to install your own applications (OK, they need to be signed and notarized if the quarantine attribute is set), and it's not the case that the Mac has mass malware infestations. Second, the App Store is full of scams, so "App Store safe, external unsafe" is a false dichotomy.

Apple uses these arguments, but of course the real reason is that they want to continue to keep 30% of every transaction made on an iPhone or iPad. This is why they have responded to the DMA with a lot of malicious compliance that makes it nearly impossible to run an alt-store financially.

(Despite my qualms about not being able to install apps outside the app store, I do think they are doing a lot of good work of making the platform more secure.)


The OP is about security and you specifically ignore security when bringing up a common flamewar topic for which much discussion has already been had on this site. Perhaps such discussion could at least be limited to articles where it is less tenuously related.

I guess I bring it up in the sense that no matter how good their security is, it still sucks that Apple products are so hostile to their owners. It's hard to be impressed by their security work with the platform being what it is.

Security, privacy, and ownership aren't equally separated in my mind.


You can request a downloadable copy of any/all of the data that Apple has associated with your account at https://privacy.apple.com.

This apparently includes retrieving all photos from iCloud in chunks of a specified size, which seems an infinitely better option than attempting to download them through the iCloud web interface, which caps downloads at 1,000 photos at a time, at less-than-impressive download speeds.


But all the software is closed source, and there is little to no opportunity to verify all these security claims. You don't have the encryption keys, so effectively the data is not under your control.

If you want to see security done well (or at least better), see the GrapheneOS project.


GrapheneOS also doesn't give you the encryption keys. If you run the official version, there is no way for you to extract the data from your device at all beyond what app developers will let you access. This means that you do not own the data on your device. The backups are even less effective than Apple's, although they say they will work on it.

The developers also appear to believe that the apps have a right to inspect the trustworthiness of the user's device, by offering to support apps that would trust their keys [1], locking out users who maintain their freedom by building their own forks.

It's disheartening that a lot of security-minded people seem to be fixated on the "AOSP security model", without realizing or ignoring the fact that a lot of that security is aimed at protecting the apps from the users, not the other way around. App sandboxing is great, but I should still be able to see the app data, even if via an inconvenient method such as the adb shell.

1. https://grapheneos.org/articles/attestation-compatibility-gu...


> The developers also appear to believe that the apps have a right to inspect the trustworthiness of the user's device, by offering to support apps that would trust their keys [1], locking out users who maintain their freedom by building their own forks.

That is not a bad thing. The alternative is not having apps that do these checks available on the platform at all. It's unreasonable to expect that every fork should have that capability, because the average developer is not going to accept the keys of someone's one-off fork.

If there’s anyone to blame, it should be the app developers choosing to do that (benefits of attestation aside).

Attestation is also a security feature, which is one of the points of GOS. People are free to use any other distribution of Android if they take issue with it.

Obviously I could be wrong here, this is just the general sentiment that I get from reading GOS documentation and its developer’s comments.


> Attestation is also a security feature

I don't actually disagree with this. The auditor is a perfectly valid use of it. It's good to be able to verify cryptographically your device is running what it's supposed to.

The problem is when it transcends ownership boundaries and becomes a mechanism to exert control over things someone doesn't own, like your bank or government controlling your phone. It is one of the biggest threats to ownership worldwide.

Note also that getting "trusted" comes at the cost of other security features, such as spoofing your location securely to apps:

https://hackernews.hn/item?id=44685283


For some reason they don't release userdebug builds, which was a dealbreaker for me (the device should be secure, but not against me).

But if you wish to build it from source, it could probably be a good option.


You can re-sign it using https://github.com/chenxiaolong/avbroot

I don't currently have any root on the phone, but I reserve the right to add it or run the userdebug build at a later date


We could use it to install magisk, but that wouldn't make the build proper "userdebug" one.

I fully agree with your original comment - AOSP security model is NOT a proper solution to the security problem, and I'd add to it that it was also designed to be anticompetitive - Google can do what third party apps can't.

Android architecture is tainted by Google's business model and it shouldn't be used as an example of a secure operating system..


You were not going to be able to use those apps anyways, so what does it matter to you? I, and I suspect many, agree with the purpose of attestation. The problems around it are strictly around establishing good ways to teach apps who they should trust, not around attestation itself. By putting your head in the sand, you'll never improve the situation.

> teach apps who they should trust

Ah, the apps^Wgovernment (look at that page, most of it is government IDs) should be able to discriminate against me for daring to assert control over my own device. And GrapheneOS is saying:

Hey government! We pinky promise to oppress the user just the same, but even more securely and competently than Google/Samsung!

> what does it matter to you

It shows that the developers maybe don't fully have your best interests at heart?


The way I look at it, there is certain software that other entities aren't willing to let you run without assurances that it won't be tampered with. You don't necessarily have a right to use that software if you cannot provide it suitable accommodations. It's your choice whether or not you want to run it; anything else is simply entitlement. This may seem annoying when it's your bank, but ultimately it's their choice to make. The current approach makes certain things painful, like trying to customize your OS, but that's a problem worth solving rather than just ignoring. More software will start relying on this over time. At the end of the day, trust is a hard problem to solve.

> It's your choice

Ah, classic false choice. Do you know it is illegal to do cash transactions over a certain amount in most Western countries now? In my mind, if I have a right to do something (buy a home), and there is only one approved way to do it, then I automatically have the right to use the approved way.

Similarly, having a government ID might technically be a choice now, but it won't be soon with all these age verification BS rolling out. So no, this is not entitlement. Your argument would work for anticheat in online games or DRM media, but not banks or government services.


Yes, how can we verify this? Who says three-letter agencies have no access?

We can't verify that the Pixel phones are safe. Nor can the GrapheneOS people, because they don't know everything that's running in the Google Tensor SoC, and they don't have the source code to the firmware running in the Samsung Exynos cellular modem.

Neither can we with Apple phones.

But we can go to great lengths in verifying GNU/Linux phones with available schematics.

Glad there's still at least one tech company that cares about personal security / opsec.


That’s Dec 2024

Weird. On my end it says January 2026.

> Since 2012, Mac computers have implemented numerous technologies to protect DMA, resulting in the best and most comprehensive set of DMA protections on any PC.

Macs are PCs now? This coming directly from Apple is hilarious.


Given that A19 + M5 processors with MIE (EMTE) were only recently introduced, I wonder how extensively macOS/iOS make use of the hardware features. Is it something that's going to take several years to see the benefit of, or does MIE provide thorough protection today?

I was just watching a video on this yesterday: https://www.youtube.com/watch?v=5McB6-2r-ds

Apple's implementation of MTE is relatively limited in scope compared to GrapheneOS (and even stock Android with advanced security enabled), as it's hardware-intensive and degrades performance. I imagine once things get fast enough we could see synchronous MTE enabled everywhere.

It is curious at the moment, though, that enabling something like Lockdown Mode doesn't force MTE everywhere, which IMO it should. I think the people who are willing to accept the compromises of enabling that would likely also be willing to tolerate the app crashes, worse performance, etc. that would come with globally enabled MTE.
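
For intuition about what MTE/EMTE buys, here's a rough software model of the hardware behavior. Real tags are 4 bits in the pointer's top byte plus per-granule memory metadata; everything below is illustrative, not Apple's implementation:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define TAG_SHIFT 56  /* tags live in the ignored top byte on AArch64 */

  static uint64_t pointer_tag(const void *p) {
      return ((uintptr_t)p >> TAG_SHIFT) & 0xF;
  }

  /* The hardware performs this comparison on every load/store: the tag
     embedded in the pointer must match the tag stored for the memory
     granule. A freed-then-reallocated chunk gets a fresh tag, so stale
     (use-after-free) pointers stop matching, and walking past the end
     of a buffer hits a granule with a different tag. */
  static void check_access(const void *p, uint64_t granule_tag) {
      if (pointer_tag(p) != granule_tag) {
          fprintf(stderr, "tag check fault\n");
          abort();  /* synchronous mode traps here, before the access */
      }
  }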


I think all of the kernel allocators and most (?) system processes in iOS 26 have MIE enabled, as does libpas (the WebKit allocator), so it’s already doing quite a lot.

Sometimes I wonder how much overhead all these security features impose in terms of performance.

I would really like to see a benchmark with and without security measures.


It's not really possible to make a direct comparison, given that a big chunk of the features are baked into the silicon, or are architecture-level choices.

It’s technically possible, but it would be difficult and likely require breaching an NDA. A bit pedantic, perhaps, but it’s out there.

Apple makes available, on a highly controlled basis, iPhones that permit the user to disable "virtually all" of the security features. They're available only to vetted security researchers who apply for one, often under some kind of sponsorship, and they're designed to obviously announce what they are. For example, they are engraved on the sides with "Confidential and Proprietary. Property of Apple".

They're loaned, not sold or given, remain Apple's property, and are provided on a 12-month (optionally renewable) basis. You have to apply and be selected by Apple to receive one, and you have to agree to some (understandable but) onerous requirements laid out in a legal agreement.

I expect that if you were to interrogate these iPhones they would report that the CPU fuse state isn’t “Production” like the models that are sold.

They refer to these iPhones as Security Research Devices, or SRDs.


These devices still have all the security features.

The ones I remember most affecting performance were zeroing allocated memory and the Spectre/Meltdown fix. Also, the first launch of a new app is slow in order to check the signature. Whole disk encryption is pretty fast today, but probably is a bit slower than unencrypted. The original FileVault using disk images was even slower.

> Whole disk encryption is pretty fast today, but probably is a bit slower than unencrypted.

Isn’t whole disk encryption nowadays done in hardware on the storage controller?


It's not whole-disk encryption, it's file-level encryption which is better. (more security guarantees)

Zeroing allocated memory is complicated because it also has performance benefits, since it improves compressed swap.


262 pages!!! Pretty interesting to see how the different SoCs have evolved security wise over time.

Somehow, they conveniently forgot to mention these "security" features:

1. Constant popups about "application requesting access" on macOS. That often happens without any user's activity.

2. If you leave the permission popup open for some time (because it's on a different screen), it auto-denies. And then you won't be able to find ANY mention of it in the UI.

3. macOS developers can't be assed to fix misfeatures, like the inability to bind low ports on localhost without root access (you can open any listening port on 0.0.0.0, but you can't open 127.0.0.1:80).
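
A minimal repro sketch of point 3, assuming the behavior described above on modern macOS (unprivileged low-port binds succeed on the wildcard address but not on 127.0.0.1):

  #include <arpa/inet.h>
  #include <errno.h>
  #include <netinet/in.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void try_bind(const char *ip, uint16_t port) {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_port = htons(port);
      inet_pton(AF_INET, ip, &addr.sin_addr);
      int rc = bind(fd, (struct sockaddr *)&addr, sizeof addr);
      printf("bind %s:%d -> %s\n", ip, port, rc == 0 ? "ok" : strerror(errno));
      close(fd);
  }

  int main(void) {
      try_bind("0.0.0.0", 80);    /* succeeds unprivileged */
      try_bind("127.0.0.1", 80);  /* fails with EACCES without root */
      return 0;
  }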


Protects the device well... against the owner of the device using it as they wish :)

[flagged]


What is "Google Messages"? I can't count the number of articles people have written over time about how many first-party messaging apps Google themselves have put out (and then put down), not to mention what messaging apps get shoveled on by third-party android integrators.

> the main reason a message wouldn't be properly end-to-end encrypted in Google's Messages app is when communicating with an iPhone user, because Apple has dragged their feet on implementing RCS features in iMessage

(or with any other android user who isn't using a first-party device / isn't using this one app)

> [...] Android's equivalent cloud backup service has been properly end-to-end encrypted by default for many years. Meaning that you don't need to convince the whole world to turn on an optional feature before your backups can be fully protected.

You make it out to seem that it's impossible for Google to read your cloud backups, but the article you link to [0] earlier in your post says that "this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor" (emphasis added). So they have your encrypted cloud backup, and the only way to get the key material to decrypt it is to get it from an HSM in their datacenter, every part of which and the access to which they control... sounds like it's not really any better than Apple, from what I'm reading here. Granted, that article is from 2018 and I certainly have not been keeping up on android things.

[0] https://security.googleblog.com/2018/10/google-and-android-h...


HSMs are designed to protect encryption keys from everyone including the manufacturer. Signal trusts them for their encryption features. It's the best security possible for E2EE backups with passcode recovery, and Apple does it too for the subset of data that they do real E2EE backups on, like Keychain passwords. Characterizing using an HSM to implement E2EE securely as "not any better than" just giving up on E2EE for messages backups is ridiculous.
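
A toy model of the guarantee being described (not any vendor's real protocol): the wrapping key never leaves the HSM, and the HSM itself enforces the guess limit, so whoever operates the datacenter can't brute-force a weak passcode offline:

  #include <string.h>

  #define MAX_ATTEMPTS 10

  struct hsm_state {
      unsigned char wrapping_key[32];   /* never exported from the HSM */
      unsigned char passcode_hash[32];  /* set when the user enrolls */
      int attempts_left;
  };

  /* Returns the wrapping key only for a correct passcode hash; after
     MAX_ATTEMPTS failures the key is erased and the backup is gone.
     (A real HSM would also use a constant-time comparison.) */
  const unsigned char *hsm_recover(struct hsm_state *s,
                                   const unsigned char guess[32]) {
      if (s->attempts_left <= 0)
          return NULL;
      if (memcmp(guess, s->passcode_hash, 32) == 0) {
          s->attempts_left = MAX_ATTEMPTS;  /* reset on success */
          return s->wrapping_key;
      }
      if (--s->attempts_left == 0)
          memset(s->wrapping_key, 0, sizeof s->wrapping_key);
      return NULL;
  }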

The HSMs that Signal and Apple are using are on-device though. Yes, you still have to trust Signal/Apple not to exfiltrate your key material once it's decrypted by the HSM, but I submit that that is materially better than having the HSMs hosted in a datacenter.

Signal and Apple and Google all use HSMs in datacenters as well as on device.

You can enable Advanced Data Protection to address that issue with iMessages.

Giving users an option between both paths is usually best. Most users care a lot more that they can’t restore a usable backup of their messages than they do that their messages are unreadable by the company storing them.

I used to work at a company where our products were built around encryption. Users here on HN are not the norm. You can’t trust that most users will save recovery codes, encryption seed phrases, etc in a manner that will be both available and usable when they need them, and then they tend to care a lot less about the privacy properties that provides and a lot more that they no longer have their messages with {deceased spouse, best friend, business partner, etc}.


> Apple can still read any message you exchange with practically anyone through their iCloud backups, since they are overwhelmingly likely to have backups enabled and overwhelmingly unlikely to have proactively enabled the non-default "Advanced Data Protection" feature.

> They could have implemented iMessage to not backup messages from people who enabled ADP, but they didn't. They won't even inform you when your conversation partner has uploaded your messages to Apple's servers in a form that Apple can read.

> Android's equivalent cloud backup service has been properly end-to-end encrypted by default for many years. Meaning that you don't need to convince the whole world to turn on an optional feature before your backups can be fully protected.

> Apple's stated reason for not enabling end-to-end encryption on iCloud backups by default is that it would cause data loss when users lose their devices. But Google's implementation avoids this problem. Furthermore, Apple does do end-to-end encryption by default on other critical information that would be painful to lose, such as your account passwords stored in Keychain. So that excuse doesn't seem to hold water.


This is your blog post, so I'll ask you a question. What are you trying to state in Belief #1? The message is unclear to me with how it's worded:

  > In this table, in the "iCloud Backup (including device and Messages backup)" row, under "Standard data protection", 
  > the "Encryption" column reads "In transit & on server". Yes, this means that Apple can read all of your messages 
  > out of your iCloud backups.
In addition to the things you mentioned, there's certainly a possibility of Apple attaching a virtual "shadow" device to someone's Apple ID with something like a hide_from_customer type flag, so it would be invisible to the customer.

This shadow device would have its own keys to read messages sent to your iCloud account. To my knowledge, there's nothing in the security model to prevent this.


This shadow device would have its own keys to read messages sent to your iCloud account. To my knowledge, there's nothing in the security model to prevent this.

Matthew Green has some great posts about iMessage security. This one describes the key lookup issue:

https://blog.cryptographyengineering.com/2015/09/09/lets-tal...

Looking at the linked Apple Platform Security, it seems like the Apple Identity Service is still used as a public key directory.
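
To make Green's point concrete: the sender encrypts to whatever key set the directory returns, and nothing the sender can check binds that set to the devices the recipient actually enrolled. Schematically (all names are illustrative stubs, not Apple's API):

  #include <stddef.h>
  #include <stdio.h>

  typedef struct { unsigned char key[32]; } pubkey;

  /* Stub for the directory lookup (the Apple Identity Service role).
     The sender has no way to audit what comes back. */
  static size_t lookup_keys(const char *user, pubkey out[], size_t max) {
      (void)user; (void)out; (void)max;
      return 3;  /* pretend the directory returned three device keys */
  }

  static void encrypt_to(const pubkey *pk, const char *msg) {
      (void)pk;
      printf("encrypting \"%s\" to one returned key\n", msg);
  }

  /* The client dutifully encrypts to every returned key, so a silently
     appended "shadow" key receives the same plaintext. */
  static void send_message(const char *user, const char *msg) {
      pubkey devices[16];
      size_t n = lookup_keys(user, devices, 16);
      for (size_t i = 0; i < n; i++)
          encrypt_to(&devices[i], msg);
  }

  int main(void) {
      send_message("alice@example.com", "hello");
      return 0;
  }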


The table has two categorizations: "In transit & on server" and "End-to-end". The former, which covers iCloud backups in the default configuration, is explicitly NOT end-to-end, meaning there are moments in time during processing where the data is not encrypted.

However, iCloud backups actually are listed as "End-to-end" if you turn on the new Advanced Data Protection feature.


Or Apple can also push an update, which you can't refuse, that upon the first message to iCloud just uploads your private key. It's a bit foolish to count on encryption implemented by the adversary you're trying to hide from. Of course, this will most likely only affect individuals targeted by state-level actors.

IIRC Apple has attempted to implement some defences against this, for example by requiring the passcode to be entered before an update can be installed, to prevent another San Bernardino scenario. A cursory search indicates that they also have some kind of transparency-log system for updates, but it seems to apply only to their cloud systems and not to iOS updates.

No mention of Pegasus and other software of that sort. Can the latest iOS still be infected?

There is no point in creating such a document if the elephant in the room is not addressed.


Apple's head of SEAR (Security Engineering & Architecture) just gave the keynote at HEXACON, a conference attended by the companies who make Pegasus such as NSO Group.

That doesn't seem like avoiding the elephant in the room to me. It seems like very much acknowledging the issue and speaking on it head-on.

https://www.youtube.com/watch?v=Du8BbJg2Pj4


Pegasus isn't magic. It exploits security vulnerabilities just like everything else. Mitigating and fixing those vulnerabilities is a major part of this document.

Why? The obvious conclusion is that Apple is doing everything in its power to make the answer “no.”

You might as well enumerate all the viruses ever made for Windows, point to them, and then ask why Microsoft isn't proving in its documents that they've all been shut down.


That analogy misses the asymmetry in claims and power.

Microsoft does not sell Windows as a sealed, uncompromisable appliance. It assumes a hostile environment, acknowledges malware exists, and provides users and third parties with inspection, detection, and remediation tools. Compromise is part of the model.

Apple’s model is the opposite. iOS is explicitly marketed as secure because it forbids inspection, sideloading, and user control. The promise is not “we reduce risk”, it’s “this class of risk is structurally eliminated”. That makes omissions meaningful.

So when a document titled Apple Platform Security avoids acknowledging Pegasus-class attacks at all, it isn’t comparable to Microsoft not listing every Windows virus. These are not hypothetical threats. They are documented, deployed, and explicitly designed to bypass the very mechanisms Apple presents as definitive.

If Apple believes this class of attack is no longer viable, that’s worth stating. If it remains viable, that also matters, because users have no independent way to assess compromise. A vague notification that Apple “suspects” something, with no tooling or verification path, is not equivalent to a transparent security model.

The issue is not that Apple failed to enumerate exploits. It’s that the platform’s credibility rests on an absolute security narrative, while quietly excluding the one threat model that contradicts it. In other words Apple's model is good old security by obscurity.


I am not sure if you missed my earlier comment, but it's directly applicable to this point you've repeatedly made:

>If Apple believes this class of attack is no longer viable, that’s worth stating.

To say it more directly this time: they do explicitly speak to this class of attack in the keynote that I linked you to in my previous comment. It's a very interesting talk and I encourage you to watch it:

https://www.youtube.com/watch?v=Du8BbJg2Pj4


In some random YouTube video consisting mostly of waffle and meaningless information like "95% of issues are architecturally prevented by SPTM". That's quite a neat and round number. Come on, dude.

[flagged]


It’s not “a weakness.” It’s many weaknesses chained together to make an exploit. Apple patches these as they are found. NSO then tries to find new ones to make new exploits.

Apple lists the security fixes in every update they release, so if you want to know what they’ve fixed, just read those. Known weaknesses get fixed. Software like Pegasus operates either by using known vulnerabilities on unpatched OSes, or using secret ones on up to date OSes. When those secret ones get discovered, they’re fixed.


don't worry, they set the allow_pegasus boolean to false

Apple did create a boolean for that. They call it lockdown mode.

> Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature. When Lockdown Mode is enabled, your device won’t function like it typically does. To reduce the attack surface that potentially could be exploited by highly targeted mercenary spyware, certain apps, websites, and features are strictly limited for security and some experiences might not be available at all.


If Pegasus can break the iOS security model, there’s no reason to think it politely respects Lockdown Mode. It’s basically an admission the model failed, with features turned off so users feel like they’re doing something about it.

Lockdown mode works by reducing the surface area of possible exploits. I don't think there's any failures here. Apple puts a lot of effort into resolving web-based exploits, but they can also prevent entire classes of exploits by just blocking you from opening any URL in iMessage. It's safer, but most users wouldn't accept that trade-off.

Claiming reduced attack surface without showing which exploit classes are actually eliminated is faith, not security.

And Lockdown Mode is usually enabled _after_ a user suspects targeting.


If you did RTFA for this story, you'll see on page 67 what I pasted, with a link to the support article describing to end users exactly what's blocked. It does greatly reduce the attack surface.

Wow, this is hardcore (pun intended).


