They have a podcast together called Dithering which is pretty good (but not free) - they're friends.
I think John's article is better than Ben's, but they're both worth reading.
Ben takes the view that unencrypted cloud is the better tradeoff - I'm not sure I agree. I'd rather have my stuff e2ee in the cloud. If the legal requirements around CSAM are the blocker then Apple's approach may be a way to thread the needle to get the best of both worlds.
For me it's the worst of both worlds - e2ee has no meaning if the ends are permanently compromised - and there's no local vs cloud separation anymore which you can use to delineate what is under your own control - nothing's under your control.
Yeah, and as argued in one of the blog posts - that's just a policy decision - not a capability decision - malleable to authoritarian countries' requests.
Yes - and I agree that that's where the risk lies.
Though I'd argue the risk has kind of always lain there, given companies can ship updates to phones. You could maybe argue it'd be harder to legally compel them to do so, but I'm not sure there's much to that.
The modern 'megacorp' centralized software and distribution we have is dependent on policy for the most part.
Yeah - the sense I got was he just liked the cleaner cut policy of a hard stop at the phone itself (and he was cool with the tradeoff of unencrypted content on the server).
It does have some advantages - it's easier to argue (see: the disaster that is most of the commentary on this issue).
It also could in theory be easier to argue in court. In the San Bernardino case - it's easier for Apple to decline to assist if assisting requires them to build functionality rather than just grant access.
If the hash detection functionality already exists and a government demands Apple use it for something other than CSAM it may be harder for them to refuse since they can no longer make the argument that they can't currently do it (and can't be compelled to build it).
That said - I think this is mostly just policy all the way down.
I have no idea if this feature existing makes it harder or easier for Apple to refuse. Based on how the feature works, it would still require a special build of iOS just like what the FBI wanted in order to remove the unlock count years ago.
Given the amount of nuance here, I also think it's important to differentiate between the FBI showing up and asking for something and governments passing laws forcing encryption backdoors. The former is what Apple has fought to date b/c they can. The latter is much harder to fight and Apple will most likely have to comply regardless of what features already exist or not (see China/iCloud). The latter is also the most dangerous since politicians rarely understand technology enough to do something sensible. It remains to be seen, but Apple could be trying to get in front of long term law changes with an alternate solution.
I think they only see the one thumbnail of the matching image? Just to make sure there isn't a (they argue one in a trillion, but I don't know if I buy that) false positive.
That's only if there are enough matches to trigger the threshold in the first place, otherwise nothing is sent (even if there are matches below that threshold).
Alternatively this is running on all unencrypted photos you have in iCloud and all matches are known immediately. Is that preferable?
I think the thumbnail is only when the threshold is passed and there's a hash match. The reason for that is an extra check to make sure there is no false positive based on the hash match (they claim one in a trillion, but even ignoring that it's probably pretty rare, and strictly better than everything unencrypted on iCloud anyway).
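To make the ordering concrete, here's a minimal sketch of that threshold-gated reveal. To be clear, the names, the threshold value and the structure are my own assumptions for illustration - it's not Apple's implementation, and the real design enforces the threshold cryptographically rather than with an if-statement:

```python
# Hypothetical sketch of threshold-gated reveal. Not Apple's code; the actual
# design enforces this with cryptography, not server-side logic.
from dataclasses import dataclass, field

THRESHOLD = 30  # assumed value, only for illustration


@dataclass
class SafetyVoucher:
    image_id: str
    matched: bool             # did the perceptual hash match the known-CSAM database?
    visual_derivative: bytes  # low-res derivative, meant to be opened only post-threshold


@dataclass
class AccountUploads:
    vouchers: list = field(default_factory=list)

    def add(self, voucher: SafetyVoucher) -> None:
        self.vouchers.append(voucher)

    def reviewable(self) -> list:
        """Server side: below the threshold nothing is reviewable,
        even if some individual vouchers did match."""
        matches = [v for v in self.vouchers if v.matched]
        return matches if len(matches) >= THRESHOLD else []
```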
> Nope, E2EE without compromises is preferable.
Well that's not an option on offer and even that has real tradeoffs - it would result in less CSAM getting detected. Maybe you think that's the acceptable tradeoff, but unless government legislatures also think so it doesn't really matter.
This isn't the clipper chip, this is more about enabling more security and more encryption by default but still handling CSAM.
>Well that's not an option on offer and even that has real tradeoffs - it would result in less CSAM getting detected. Maybe you think that's the acceptable tradeoff, but unless government legislatures also think so it doesn't really matter.
It should and can be an option. Who cares what they offer us. Do it yourself.
I really don't understand how you're arguing as if you don't see the bigger picture. Is this a subtle troll?
They are now scanning on the device. Regardless of how limited it is in its current capabilities, those capabilities are only prevented from being expanded by Apple's current policies. The policies enacted by the next incoming exec who isn't beholden to the promises of the previous can easily erode whatever 'guarantees' we've been given when they're being pressured for KPIs or impact or government requests or promotion season or whatever. This has happened time and again. It's been documented.
I really am at a loss how you can even attempt to be fair to Apple. This is a black and white issue. They need to keep scanning for crimes off our devices.
So to answer your question, yes it is preferable to have them be able to scan all of the unencrypted photos on iCloud. We can encrypt things beforehand if need be. It is lunacy to have crime detecting software on the device in any fashion because it opens up the possibility for them to do more. The people in positions to ask for these things always want more information, more control. Always.
The above reads like conspiracy theory but over the past couple of decades it has been proven correct. It's honestly infuriating to see people defend what's going on in any way shape or form.
This is a policy issue in both cases - policy can change (for the worse) in both cases.
The comparison is about unencrypted photos in iCloud or this other method that reveals less user information by running some parts of it client side (only if iCloud photos are enabled) and could allow for e2e encryption on the server.
The argument of "but they could change it to be worse!" applies to any implementation and any policy. That's why the specifics matter imo. Apple controls the OS and distribution, governments control the legislation (which is hopefully correlated with the public interest). The existing 'megacorp' model doesn't have a non-policy defense to this kind of thing so it's always an argument about policy. In this specific implementation I think the policy is fine. That may not hold if they try to use it for something else (at which point it's worth fighting against whatever that bad policy is).
Apple's good solutions to the CSAM problem (which I think thread the needle for a decent compromise) could prevent worse policy from the government later (attempts to ban encryption or require key escrow like in the 90s).
This implementation as it stands reveals less information about end users and could allow them to enable e2ee for photos on their servers - that's a better outcome than the current state (imo).
1. Encrypt everything in the cloud but upload the hashes of these items as well on the device. Also notify us so we can notify law enforcement if they're doing some illegal stuff.
2. Everything is unencrypted in the cloud. No actions are taken on the device. No notifications to authorities.
With option one the sanctity of the device ownership is breached. With option two it's maintained. Maintaining that stark distinction is hugely important for what future actions can be taken in the public eye. Normalizing on device actions that work against the user must be fought at every instance they occur.
Your line of thinking is dangerous because you're ignoring the public perception of a device you own actively working against you. Apple's behavior cannot be allowed to be considered normal.
To be fair - I think reasonable people can disagree on this.
I don't think it's a rationalization to point out that it only occurs when the same baseline conditions are met (using the cloud). I think those constraints/specifics matter. I wouldn't be in favor of the policy if they were different (and I'm not even sure I'm in favor of it now).
My personally preferred outcome would be e2ee by default for everything without any of this, but I also understand the concerns of NCMEC and the general tradeoffs/laws around this stuff (and future regulatory risk of CSAM) - and just the general issue of reducing child sexual abuse.
I am also in favour of E2E by default for everything without any device or cloud based scanning. However, Apple doesn't want to be caught having developed a service that enables child exploitation. Doing nothing may lead to even more invasive requirements being legally forced by government, so Apple is stuck with a dilemma. Also let's not forget that Apple should not want child exploitation to occur either, and therefore should do something.
The question I have for drenvuk is how else is Apple able to prevent or detect child exploitation and the storage or distribution of content such as this on Apple's services?
> The end isn't really compromised with their described implementation.
They've turned your device into a dragnet for content the powers that be don't like. It could be anything. They're not telling you. And you're blindly trusting them to have your interests at heart, to never change their promise. You don't even know these people.
> "They've turned your device into a dragnet for content the powers that be don't like. It could be anything. They're not telling you"
They're pretty explicitly telling us what it's for and what it's not for.
> "And you're blindly trusting them to have your interests at heart, to never change their promise. You don't even know these people."
You should probably get to work building your own phone, along with your own fab, telecoms, networks, - basically the entire stack. There's trust and policy all over the place. In a society with rule of law we depend on it. You think your phone couldn't be owned if you were important enough to be targeted?
Yes. All the fabs and stuff we have now should be devoted to implementing a surveillance state. This must happen. It cannot be any other way.
This is what you sound like. The problem here isn't the tech. It's that Big Tech has deluded society into believing privacy and personal ownership of devices doesn't exist because it would inconvenience Big Tech. Law enforcement echoes it because they were spoiled by the brief period that they tasted ClearNet, and they don't want to return to having to investigate the old fashioned way.
Every other major industry has increasingly started doing the same thing. It is not okay. We have no right to sell out future generations' privacy. It's cowardly, selfish, and does more harm to them in the long run.
I’m not making statements about the way things ought to be, I’m being pragmatic about the way things are.
We’re trusting a lot of the stack. Apple’s policy as described is reasonable. If you distrust it because of the things they could do, that same logic applies to the entire stack.
The nature of modern software distribution is that the majority of the stuff we use from centralized corporations is governed by policy, not technical capability or controls, and you don’t get to know the details.
You don’t own the OS you use, you don’t own the important parts of your phone.
This can be different. Decentralized applications via protocols are interesting (DeFi blockchain stuff like Audius or other apps on Ethereum). If the UX can get figured out.
Outside of decentralized protocols you’re ultimately just trusting policy somewhere. It seems dumb to me to arbitrarily be upset at Apple’s policy here, when the specifics are reasonable (and allow for e2ee).
Apple is performing warrantless searches, and somehow people see it as okay because Apple is not the government (even though it functions as a state agency in this regard).
> They're pretty explicitly telling us what it's for and what it's not for.
Nobody should blindly trust Apple. As an organization, they already love secrecy and shadows--what better place to sneak in and test this kind of feature, free from employee ethics and scrutiny?
They've been cooking this up without telling anyone, which is also indicative of how above board they are. Who knows what else they're doing with this now or will do in the future.
The CIA, FBI, MI6, Mossad, FSB, CCP, et al. will use this to learn more about their targets.
One logical conclusion of systems like this is that modifying your device in any "unauthorized" way becomes suspicious because you might be trying to evade CSAM detection. So much for jail-breaking and right to repair!
Even the HN reporting / article linking / comments have been surprisingly low quality - lots of fulminating and declaiming, little interesting conversation, and tons of sweeping assertions.
Linked articles and comments have said apple's brand is now destroyed, that apple is somehow committing child porn felonies with this (the logical jumps and twisting needed to get to these claims are very far from a strong, plausible interpretation).
How do you scan for CSAM in an E2EE system is the basic question Apple seems to be trying to solve for.
I'd be more worried about the encrypted hash DB being unlockable - is it clear this DOES NOT have anything that could be recreated into an image? I'd actually prefer NOT to have E2EE and have apple scan stuff server side, and keep DB there.
> The laws related to CSAM are very explicit. 18 U.S. Code § 2252 states that knowingly transferring CSAM material is a felony. (The only exception, in 2258A, is when it is reported to NCMEC.) In this case, Apple has a very strong reason to believe they are transferring CSAM material, and they are sending it to Apple -- not NCMEC.
> It does not matter that Apple will then check it and forward it to NCMEC. 18 U.S.C. § 2258A is specific: the data can only be sent to NCMEC. (With 2258A, it is illegal for a service provider to turn over CP photos to the police or the FBI; you can only send it to NCMEC. Then NCMEC will contact the police or FBI.) What Apple has detailed is the intentional distribution (to Apple), collection (at Apple), and access (viewing at Apple) of material that they strongly have reason to believe is CSAM. As it was explained to me by my attorney, that is a felony.
Apple is going to commit child porn felonies according to US law this way. This claim seems actually quite irrefutable.
Ahh - an "irrefutable" claim that apple is committing child porn felonies.
This is sort of what I mean and a perfect example.
People imagine that apple hasn't talked to the actual folks in charge NCMEC.
People seem to imagine apple doesn't have lawyers?
People go to the most sensationalist least good faith conclusion.
Most mod systems at scale are using similar approaches. Facebook is submitting 10's of MILLIONS of images to NCMEC; these get flagged by users and/or systems, and in most cases facebook then copies them, checks them through a moderation queue and submits to NCMEC.
Reddit uses the sexualization of minors flags. In almost all cases, even though folks may have strong reasons to believe some of this flagged content is CSAM, it still gets a manual look. Once they know they act appropriately.
So the logic of this claim about apple's late-to-the-party arrival at CSAM scanning is weird.
We are going to find out that instead of trying to charge apple with some kind of child porn charges, NCMEC and politicians are going to be THANKING apple, and may start requiring others with E2EE ideas to follow a similar approach.
Sorry under which of these other moderation regimes does the organisation in question transmit CSAM from a client device to their own servers? To my knowledge Apple is the only one doing so.
Facebook checks for potential CSAM when you upload from your client device (sometimes an iphone) to their servers or after it's on their system and a user flags it.
Instagram also check once you upload.
These are all transmissions.
Apple checks if you upload. If you don't upload or attempt to upload to their servers, no check.
All these are to flag potential CSAM. Some do more - nudity in general, harmful content filters etc. Some is auto blocked, some is forwarded for review and report etc.
In almost all cases flagging is part of or connected to uploading to a third parties servers. The flagging for CSAM is not a conviction - some do and some don't do a human review before submission. Most situations where folks can use flagging to hide content get a human review at some level to avoid abuse of the flagging system itself.
> Facebook checks for potential CSAM when you upload from your client device (sometimes an iphone) to their servers or after it's on their system and a user flags it.
Thats not the issue according to the linked source.
Instagram transmits all photos and assumes they're not CSAM until flagged - that's safe content. Apple transmits ONLY CSAM - that's a no-no, because they're assuming it is CSAM and you can't transmit CSAM.
You can't knowingly transmit CSAM. Transmitting a photo pre-scan is safe (if you assume any photo is not CSAM); transmitting post-scan is dangerous if you filter for CSAM.
Apple uploads all photos to iCloud photos (just as google does). This includes CSAM and not CSAM.
It keeps a counter going of how much stuff is getting flagged as possible CSAM. They don't even get an alert about anything until you hit some thresholds etc. And no one has reviewed anything yet at all, the system is flagging things up as other systems do.
Are you sure (legally) that one can't review flags from a moderation system? That is routine currently. No one is knowingly doing anything. Their system discusses being alerted to possible counts of CSAM.
Is your goal that apple go straight to child porn reports automatically with no human involvement at all? At scale (billions of photos) that's going to be a fair number of false positives with potentially very serious consequences.
The current approach is that images are flagged in various ways, folks look at them (yes, a large number have problems), then next steps are taken. But the flags are treated as possible CSAM.
Please look into all the false positives in youtube screening before you jump from a flag => sure thing. These databases and systems are not perfect.
I'm not a lawyer and i want apple to do nothing especially not scan my device.
I'm saying the linked article in discussion says you can't transmit content you KNOW (or suspect) is CSAM. You don't assume that all your customers' content is CSAM, but post-scan, you should assume.
The only legal way to transmit (according to article) is if it's to the government authorities.
I don't know the legal view on the "false positive" suspicion vs legality of transmitting. That's a gamble it seems. I don't have a further opinion on it since IANAL and this is very legal grey area.
Apple is very clear that they don't know anything when photos are uploaded. The system does not even begin to tell them that some may have CSAM until it's had like 30 or so matches. The jump from this type of system (variations are used by everyone else) to some kind of child porn charges is such a reach it's really mind boggling. Especially since the very administrative entities involved are supporting it.
A strong claim (apple committing CSAM felonies) should be supported by reasonably strong support.
Here we have a blog post where they've talked to someone ELSE who (anonymously) has reached some type of legal conclusion. If you follow the QAnon claims in this area (there are lots) they follow a somewhat similar approach - someone heard from someone that something someone did is X crime. It's a weak basis for these legal conclusions.
Apple is attaching a ticket to images as the user uploads to iCloud. If enough of these tickets match CSAM to allow an unlock key to be built, they get unlocked and checked. It's still the user who has turned on iCloud and uploaded the images.
The one odd thing I don't get: it would be a lot EASIER to just scan everything when it's in the cloud itself.
Why go to this trouble to avoid looking at users' photos in the cloud, set these thresholds etc.? You'd only need to scan on device if for some reason you blocked your own ability to scan in the cloud (ie, for E2E photos - which I don't think users actually want).
Something like all of iCloud getting E2EE would be a big feature and likely only be announced at an event. I agree, if the CSAM on device scanning isn’t followed on by something else, it seems like a lot of work and PR flak for little gain.
Right - the system is actually quite complex to blind apple to something they currently have sitting with their own keys on it on their own servers. I mean, they can (and maybe will) just scan directly?
That’s amusing, but your source is completely wrong about (1) what 18 USC § 2252 says in general (notably, it leaves out the “knowingly” requirement, which is critical given the weight being given to post-auto-flag, pre-verification transfer), (2) what exceptions are in § 2252, and (3) the entire reference to § 2258A, which is a separate reporting requirement, not an exception to § 2252. Really, one should read all of the chapter those sections are part of, but the whole argument is based on either fantasy or distortion of the text.
Apple transfers the images to iCloud, yes, but before the threshold of flagged photos is reached, Apple doesn't know that there might be CSAM material among them. When the threshold is exceeded and Apple learns about the potential of CSAM, the images have been transferred already. But then Apple does not transfer them any further, and has a human review not the images themselves, but a "visual derivative" that was in a secure envelope (that can by construction only be unlocked once the threshold is exceeded).
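The "can by construction only be unlocked once the threshold is exceeded" part is the kind of property you get from threshold secret sharing. Here's a minimal Shamir-style sketch - my own illustration under that assumption, not Apple's actual scheme - where a key splits into shares such that any t of them reconstruct it, while fewer reveal nothing:

```python
# Minimal Shamir threshold secret sharing over a prime field. Illustrative only;
# Apple's actual construction differs in detail, but the "t-of-n unlock" idea is the same.
import random

PRIME = 2**127 - 1  # a Mersenne prime, big enough to hold a 16-byte key


def split(secret: int, threshold: int, n_shares: int):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, poly(x)) for x in range(1, n_shares + 1)]


def recover(shares) -> int:
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


key = 0xDEADBEEF
shares = split(key, threshold=30, n_shares=100)
assert recover(shares[:30]) == key  # 30 matching shares: key recovered, envelopes open
# recover(shares[:29]) returns an unrelated value: below threshold, nothing is learned
```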
It’s good to know that if I use the “section” glyph in an otherwise completely false analysis born of extending my experience past where I’m competent, readers will quickly repeat whatever it is I said as factual and “quite irrefutable”. The power of one blog post is quite something. You’re the third person I’ve seen quote that very post and go “welp, sure is a smoking gun” despite it being completely, utterly, demonstrably false. It’s make believe. That entire blog post is so incorrect that it’s barely useful as toilet paper, friend.
If the author were correct Facebook and others would have been charged with felonies already. Verification happens in good faith and requires transmission despite the exact letter of the law. The people doing it regularly talk to NCMEC. There are evidentiary and chain of custody procedures to follow that also transmit the material. Guess we should charge computers with felonies!
It’s also fucking hysterical that the armchairs who didn’t know what CSAM stood for a week ago think that one of the most litigiously sensitive systems ever built by a corporation somehow missed criminal legal liability. You know, it’s pretty easy to overlook criminal liability when you’re building a system that is used to establish criminal liability. Totally passes the smell test. Irrefutable indeed.
You guys are off your rockers and should move on to another topic. Seriously. Every day this is discussed is another harsh reminder of (a) how few fucks the industry gives about abuse of children and (b) how everyone here digests blogs and considers themselves authoritative on horrors they’ve never once experienced. Every day this is argued on HN, particularly last night when someone said the privacy situation is “the actual abuse” and “much worse” than the rape of children, is another day I’m ashamed to have chosen this profession and work alongside you. This community welcomes the people who have built surveillance systems for every consumer activity imaginable and kids getting molested is where you draw the privacy line, huh? Can’t look for that? I’m genuinely out. Keep your industry. I’d rather manage a Taco Bell if this is the perspective of this industry, because brother, I’ve seen what they’re trying to tackle and I’m still in therapy.
There is a hypocrisy at the core of the “privacy” argument here that is fundamentally indicting not only this community, but everyone discussing this up to and including EFF. I’ve never been so disappointed in people who I used to look up to and think of as good, smart folks.
Source: I’ve built CSAM handling systems for a FAANG and written the policy for using it. The author has misinterpreted the very law he cites and overlooked two subparagraphs. But that isn’t what you want to hear.
I'm a parent, it's also crazy to me how THIS of all things is where folks are going crazy over privacy. Literally, they will build something to track your every mouse move, scan every photo, log and sell all your browsing history and TV watching history (including big ISPs and mfgs).
And this is the thing folks get outraged about? Apple can already scan your stuff on their servers (and should!).
Instead of apple being charged criminally, other companies that do any kind of E2E without this may be required to do something like this. That's my prediction.
We will see if these "irrefutable" claims amount to anything like a child porn charge against apple.
Most who really care about the central hypocrisy have been banging our heads bloody about all those other surveillance issues. We're so damned concerned because this is a textbook case of cramming through another brick in the wall of deconstructed privacy, now and into the future. There are plenty of terrible arguments, but the good ones that people keep trying to handwave away or ignore are the more pressing ones.
> I'm a parent [...]
> Apple can already scan your stuff on their servers (and should!).
It's crazy to me how you fail to see that instead of everyone playing cyber cowboys and child predator indians the focus for preventing child abuse should be in real life. Criminalizing content does absolutely nothing for those kids who are abused by someone close to them. And unfortunately the vast majority (~75%) of cases happen like that.
If the stats are even remotely right it's unfathomable how bad prevention and deterrence is. (Over the course of their lifetime, 28% of U.S. youth ages 14 to 17 had been sexually victimized -- https://victimsofcrime.org/child-sexual-abuse-statistics/ )
This shows how bad our aggregated priorities are. War on X so far only made X worse. ¯\_(ツ)_/¯
Actually - sharing and having the photos out there is further and active victimization of the victims and does bother them. So shutting that down is a good use of resources.
Do they get a nice feel good card from NCMEC that states that they have managed to delete this many copies that year? /s
I don't doubt for a minute that victims continue to suffer from the knowledge that there's a lot of traumatizing content on the Internet, and that various services integrating with clearinghouses help.
I also don't doubt that Apple is at least semi-competent and could pull this off in an okay-ish way. But all the goodwill and clout that these megacorps have would have been better spent on advocating for policies that prevent child abuse. (Neglect, physical and sexual.)
Don't you think that telling people now that there will be a "check" at Apple before things get reported to NCMEC could be a PR lie to keep people calm?
They can easily say afterwards that they're "frankly" required to directly report any suspicion to enforcement agencies because "that's the law", and that the earlier promise was just an oversight.
That would be just a usual PR strategy to "sell" something people don't like: selling it in small batches works best. (If the batches are small enough people often don't even realize what the whole picture looks like. Salami tactics are a tried tool for something like that; used for example in day-to-day politics.)
IMHO, what Apple is doing is not _knowingly_ transferring CSAM material. Very strong reason to believe is not the same as knowing. Of course it's up to courts to decide and IANAL.
Apple isn't looking at the actual image, but a derivative. Presumably their lawyers think this will be sufficient to shield them from accusations of possessing child porn.
Love 'em or hate 'em, it is hard to believe Apple's lawyers haven't very carefully figured out what kind of derivative image will be useful to catch false positives but not also itself illegal CP. I assume they have in fact had detailed conversations on this exact issue with NCMEC.
Firstly, the NCMEC doesn't make the laws. They can't therefore give any exceptional allowance to Apple.
Secondly, any derivatives that are clear enough to enable a definitive judgment by an Apple employee of whether something's CP or not would be subject to my argument above. Also, just collecting such material is a felony.
I don't see any way around that. To me, promising some checks before stuff gets reported for real is just a PR move to smooth over the first wave of pushback. PR promises aren't truly binding…
Another reminder that many parts of HN have their own biases; they're just different than the biases found on other networks.
Instead of exclusively focusing on the authoritarian slippery slope like it's inevitable, it's worth wondering first: why do the major tech companies show no intention of giving up the server-side PhotoDNA scanning that has already existed for over a decade? CSAM is still considered illegal by half of all the countries in the entire world, for reasons many consider justifiable.
The point of all the detection is so that Apple isn't found liable for hosting CSAM and consequently implicated with financial and legal consequences themselves. And beyond just the realm of law, it's reputational suicide to be denounced as a "safe haven for pedophiles" if it's not possible for law enforcement to tell if CSAM is being stored on third-party servers. Apple was not the best actor to look towards if absolute privacy was one's goal to begin with, because the requests of law enforcement are both reasonable enough to the public and intertwined with regulation from the higher powers anyway. It's the nature of public sentiment surrounding this issue.
Because a third party insisting that user-hosted content is completely impervious to outside actors also means that it is possible for users to hide CSAM from law enforcement using the same service, thus making the service criminally liable for damages under many legal jurisdictions, I was surprised that this debate didn't happen earlier (to the extent it's taking place, at least). The two principles seem fundamentally incompatible.
The encryption on the hash DB has very little to do with recreating images. It is pretty trivial to make sure that it is mathematically impossible to do (just not enough bytes, and hash collisions mean there is an infinitely large number of false positives).
My own guess is that the encryption is there so that people won't have access to an up-to-date database to test against. People who want to intentionally create false positives could abuse it, and sites that distribute images could alter images to automatically bypass the check. There is also always the "risk" that some security researcher may look at the database and find false positives from the original source and make bad press, as they have done with block lists (who can forget the bonsai tree website that got classified as child porn).
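For intuition on the "not enough bytes" point, a classic perceptual hash like dHash boils an entire image down to 64 bits. (NeuralHash is a different, learned function - this is only my own illustration of how little information such a hash carries, which is why it can't be inverted back into an image.)

```python
# A classic difference hash (dHash) for illustration; requires Pillow.
from PIL import Image


def dhash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit hash based on horizontal brightness gradients."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # 1 bit per pixel pair
    return bits  # 64 bits total: nowhere near enough data to rebuild the photo
```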
It seems like the Gruber article follows a common formula for justifying controversial approaches. First, "most of what you hear is junk", then "here's a bunch of technical points everyone gets wrong" (but where the wrongness might not change the basic situation), then go over the non-controversial parts, and then finally get to the controversial parts and give the standard "think of the children" explanation. But if you've cleared away all other discussion of the situation, you might make these apologetics sound like new insight.
Is Apple "scanning people's photos"? Basically yes? They're doing it with signatures but that's how any mass surveillance would work. They promise to do this only with CSAM but they previously promised to not scan your phone's data at all.
But some of those technical points are important. Parent comment was concerned that photos of their own kids will get them in trouble - it appears the system was designed to explicitly to prevent that.
The Daring Fireball article actually is a little deceptive here. It goes over a bunch of things that won't get parents in trouble and gives a further couched justification of the fingerprinting example.
The question is whether an ordinary baby photo is likely to collide with one of the CSAM hashes Apple will be scanning for. I don't think Apple can give a definite no here (Edit: how could they guarantee that a system designed to find disguised/distorted CSAM won't tag a random baby picture with a similar appearance? And given such a collision, the picture might be looked at by Apple and maybe law enforcement).
Separately, Apple does promise only to scan things going to iCloud for now. But their credibility no longer appears high given they're suddenly scanning users' photos on the users' own machines.
> how could they guarantee that a system designed to find disguised/distorted CSAM won't tag a random baby picture with a similar appearance?
Cannot guarantee, but by choosing a sufficiently high threshold, you can make the probability of that happening arbitrarily small. And then you have human review.
> And given such collision, the picture might be looked at by Apple and maybe law enforcement
Do you have any idea what that means? Because I certainly don't - how could you possibly identify whether an image is CSAM without looking at something which is reasonably the same image?
What is a visual derivative? Take that algorithm and run it over some normal images and show me what they look like.
All of this is being aggressively talked around because everyone knows it's not going to stand up to any reasonable scrutiny (i.e. plenty of big image datasets out there - does Apple's implementation flag on any of those? Who knows - they're not going to refer to anything specific about how they got "1 in a trillion").
No, I don't know what that means. Presumably it is some sort of thumbnail, maybe color inverted or something.
> does Apple's implementation flag on any of those? Who knows - they're not going to refer to anything specific about how they got "1 in a trillion"
I assume they've tested NeuralHash on big datasets of innocuous pictures, and gotten some sort of bound on the probability of false positives p, and then chosen N such that p^N << 10^-12, and furthermore imposed some condition on the "distance" between offending images (to ensure some semblance of independence). At least that's what I'd do after thinking about the problem for a minute.
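A back-of-the-envelope version of that calculation, with a purely made-up per-image false-match rate since Apple hasn't published one:

```python
def min_threshold(p: float, target: float = 1e-12) -> int:
    """Smallest N with p^N <= target, assuming independent false matches
    (a strong assumption: one person's photo library is highly correlated)."""
    n, prob = 0, 1.0
    while prob > target:
        prob *= p
        n += 1
    return n

# With a purely hypothetical per-image false-match rate of 1 in 500:
print(min_threshold(0.002))  # -> 5, since 0.002**5 ~= 3.2e-14 <= 1e-12
```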
> I assume they've tested NeuralHash on big datasets of innocuous pictures, and gotten some sort of bound on the probability of false positives p, and then chosen N such that p^N << 10^-12
What's interesting about this faulty argument is that it hinges on an assumption that "innocuous pictures" is a well defined space that you can use for testing and get reliable predictions from.
A neural network does classification by drawing a complex curve between one large set and another large set in a high dimensional feature space. The problem is those features can include, and often do include, incidental things like lighting, subject placement and so forth. And this often works because your target data set really does uniquely have feature X. So you can get a result saying your system reliably finds X, but when you go out to the real world, it keys on those incidental features.
I don't know exactly how NeuralHash works but I'd presume it has the same fundamental limitations. It has to find images even after they've been put through easy filters that are going to change every particular pixel, so it's hard to see how it wouldn't match picture A to picture B when they look alike if you squint.
“If it works as designed” is I think where Gruber’s article does its best work: he explains that the design is pretty good, but the if is huge. The slippery slope with this is real, and even though Apple’s chief of privacy has basically said everything everyone is worried about is currently impossible, “currently” could change tomorrow if Apple’s bottom line is threatened.
I think their design is making some really smart trade offs, given the needle they are trying to thread. But it shouldn’t exist at all, in my opinion; it’s too juicy a target for authoritarian and supposedly democratic governments to find out how to squeeze Apple into using this for evil.
The EFF wrote a really shitty hit piece that deliberately confused the parental management function with the matching against hashes of illegal images. Two different things. From there, a bazillion hot takes followed.
The EFF article refers to a "classifier", not just matching hashes.
So, three different things.
I don't know how much you know about them, but this is what the EFF's role is. Privacy can't be curtailed uncritically or unchecked. We don't have a way to guarantee that Apple won't change how this works in the future, that it will never be compromised domestically or internationally, or that children and families won't be harmed by it.
It's an unauditable black box that places one of the highest, most damaging penalties in the US legal system against a bet that it's a perfect system. Working backwards from that, it's easy to see how anything that assumes its own perfection is an impossible barrier for individuals, akin to YouTube's incontestable automated bans. Best case, maybe you lose access to all of your Apple services for life. Worst case, what, your life?
When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear? Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm? How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime[1]".
Someone's being dumb here, and it's probably the ones who believe that fruit can only be good for them.
> When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear?
You would have to have not one, but N perceptual hash collisions with existing CSAM (where N is chosen such that the overall probability of that happening is vanishingly small). Then, there'd be human review. But no, presumably there won't be a warning.
> Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm?
Everyone can opt out (for now at least) by disabling iCloud syncing. (You could sync to another cloud service, but chances are that then they're scanned there.)
Beyond that, it would be good if Apple built it verifiably identically across jurisdictions. (If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature.)
> How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime[1]".
By ensuring that a) the used hash database is verifiably identical across jurisdictions, and b) notifications go only to that US NGO. Would be nice if Apple could open source that part of the iOS, but unless one could somehow verify that that's what's running on the device, I don't see how that would alleviate the concerns.
That isn't an offer of legal protections or guarantees that the trustworthiness and accuracy of their methods can be verified in court.
It really doesn't matter how they do it now that we know that iOS has vulnerabilities that allow remote monitoring and control of someone's device, to the extent that it created a market for at least one espionage tool that has led to the deaths of innocent people.
I remember when the popular way to shut down small forums, business competitors, or get embarrassing information taken off the web was to anonymously upload CP to it and then report it, repeatedly. With this, what's to stop virtual "SWATing" of Apple customers? Not necessarily just those whose Apple products have been compromised, whose iClouds have been compromised, or who are the victims of hash collisions (see any group of non-CSAM images that CSAM detection flags).
Will Apple analyze all hardware to ensure no innocent person is framed because of an undisclosed vulnerability? What checks are being offered on this notoriously burdensome process on the accused?
>If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature
You’re making my point. Nobody cares about your penis pictures.
EFF is an advocacy group, and you need to read between the lines what they say because they have a specific set of principles that may or may not align with reality. They published a bad article and took an extreme stance about what could be as opposed to what is.
Parents care about children sending or receiving explicit material. This is for behavioral, moral and liability reasons.
When your 12 year old boy sends a dick pic to his girlfriend, that may be a felony. When your 16 year old daughter sends an explicit picture to her 18 year old crush, that person may be committing a felony by possessing it.
So why doesn't Apple release an iPod Touch or iPhone without a camera? Maybe release child safe versions of their apps that can't send or receive images from unapproved (by parents) parties?
There seem to be an endless number of ways to achieve what they claim without invasively scanning people's private data.
EFF today is really not the organisation it was just a few years ago. I dont know who they hired badly, but the reasoned takedowns have been replaced with hysterical screaming.
"The Messages feature is specifically only for children in a shared iCloud family account. If you’re an adult, nothing is changing with regard to any photos you send or receive through Messages. And if you’re a parent with children whom the feature could apply to, you’ll need to explicitly opt in to enable the feature. It will not turn on automatically when your devices are updated to iOS 15."
I still don't understand how this is allowed. If the police want to see the photos on my device, then they need to get a warrant to do so. Full stop. This type of active scanning should never be allowed. I hope that someone files a lawsuit over this.
Speculating (IANAL) - it's only when iCloud photos is enabled. I'd guess this is akin to a third party hosting the files; I think the rules around that are more complex.
...as a prerequisite of avoiding criminal liability for statutory violation of a Federal statute. Transitive property of logic therefore yields that this private entity is acting as a Government proxy. Therefore, Constitutional considerations.
It's also not even wrong in so many ways that it really highlights how far DF has fallen over the years. Really ugly stuff, handwaving about hashing and nary a mention of perceptual hashing and collisions. Not a technology analysis of any sort.
Non-zero being technically true because of the subject matter, but I don’t see how Apple’s system increases the risk of authorities killing family or pets more than server-side scanning.
Their neural hashing is new, and they claim it has a one in a trillion collision rate. There are 1.5 trillion images created in the US each year and something like 100 million photos in the comparison database. That's a heck of a lot of collisions. And that's just a single year; Apple will be comparing everyone's back catalog.
A lot of innocent people are going to get caught up in this.
We’ll have to wait and see how good their neural hashing is, but just to clarify, the 1 in a trillion number is the “probability of incorrectly flagging a given account” according to Apple’s white paper.
I think some people think that’s the probability of a picture being incorrectly flagged, which would be more concerning given the 1.5 trillion images created in the US.
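To make the difference concrete, a quick back-of-the-envelope using the figures from this thread plus an assumed round number of accounts (not official data):

```python
# Why "per account" vs "per image" matters. The 1.5 trillion figure is the one
# cited above; the account count is my own round-number assumption.
p = 1e-12                  # claimed probability of an incorrect flag
images_per_year = 1.5e12   # photos taken in the US per year (figure cited above)
accounts = 1.0e9           # assumed number of iCloud photo accounts

print("if per image:  ", p * images_per_year, "expected false flags per year")  # ~1.5
print("if per account:", p * accounts, "expected falsely flagged accounts")     # ~0.001
```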
"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match..."
So it's 1 in 1 trillion per account PRIOR to manual review in which the odds of error get reduced even further.
How is it that you are going to "wait and see how good their neural hashing is"? Do you think there is going to be any shred of transparency about the operation of this system? It is completely unaccountable - starting with Apple and going on to NCMEC and the FBI.
I think you're wrong about the risk (the paper says per account), but even so you need to compare it to the alternatives.
Photos in iCloud are unencrypted and Apple checks for CSAM on the unencrypted photos server side, they know of all matches.
OR
Photo hashes are checked client side and only if a certain threshold of matches is passed does Apple get notified at all (at which point there's a sanity check for false positive by a person). This would allow all photos on iCloud to be able to be encrypted e2e.
Both only happen when iCloud photo backup is enabled.
Gruber practically (no, perhaps actually) worships Apple. He'd welcome Big Brother into his house if it came with an Apple logo, and he'd tell us how we were all wrong for distrusting it. He's not the voice to listen to this time, and you shouldn't trust him to have your best interests at heart.
People are furious with Apple, and there's no reason to discount the completely legitimate concerns they have. This is a slippery slope into hell.
It's a good thing congress is about to start regulating Apple and Google. Maybe our devices can get back to being devices instead of spy tools, chess moves, and protection rackets.
(read: Our devices are supposed to be property. Property is something we fully own that behaves the way we want. It doesn't spy on us. Property is something we can repair. And it certainly is not a machination to fleece the industry by stuffing us into walled and taxed fiefdoms, taking away our control. Discard anything that doesn't behave like property.)
[edit: I've read Gruber's piece on this. It's wishy-washy, kind of like watching a moderate politician dance on the party line. Not the direct condemnation this behavior deserves. Let's not take his wait-and-see approach with Dracula.]
> Gruber practically (no, perhaps actually) worships Apple. He'd welcome Big Brother into his house if it came with an Apple logo, and he'd tell us how we were all wrong for distrusting it.
You mean the same Gruber who described the situation as “justifiably, receiving intense scrutiny from privacy advocates.”? The one who said “this slippery-slope argument is a legitimate concern”?
I'm having a hard time reconciling your pat dismissal with the conclusion of his piece which very clearly rejects the position you're attributing to him as grounds for dismissal:
> But the “if” in “if these features work as described and only as described” is the rub. That “if” is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.
I mean, sure, know where he's coming from but be careful not to let your own loyalties cause you to make a bad-faith interpretation of a nuanced position on a complex issue.
If icloud backup works as advertised - it backs up your device.
However, if we consider the slippery slope, under pressure from a shadow government, the contents of your phone could have been uploaded to the CIA every day, including live recordings 24 hours a day.
> the contents of your phone could have been uploaded to the CIA every day, including live recordings 24 hours a day.
You’re describing new functionality which would have to be added in many places: in addition to building that service they have to turn off the recording indicators and prompts, coexist with other apps recording, not have recording pause playback in other apps like normal, masking data usage on both the phone and your carrier’s reports, concealing the battery loss and putting in bigger hardware batteries to compensate, etc.
That’s technically possible but there’s no link to this feature - that hypothetical government would need to do the same things either way. It’s similarly not Apple-specific: with that level of control the same thing would happen Android and anything else.
Phone has functionality to backup its contents. Phone has functionality to record things. No new functionality needed.
Thus, they are backdoors built into the system with the ability to record everything and upload it to an authoritarian regime for the genocide of the human race.
But we all know the real evil here is using a hashing algorithm to check images you upload to their server for known kiddie porn.
So why are people freaking out about a hash scanning system as an invasion of privacy when the grab anything from your phone contents feature has been there for 10+ years?
Policy-wise Apple have just said "yes, we are going to use our super-admin powers to push updates to turn your phone against you".
Sure, a suitably powerful authoritarian org could do lots of secret things, but that isn't what happens in real life: in real life you publicly change policy in increments and get everyone to go along with it.
"Apple was actually scanning all users photos for CSAM regardless of iCloud usage due to a bug in the most recent firmware" is a headline that is guaranteed in the future.
There are legitimate things to be concerned about, but 99% of internet discussion on this topic is junk.