This is a policy issue in both cases - policy can change (for the worse) in both cases.
The comparison is between unencrypted photos in iCloud and this other method, which reveals less user information by running some of the checks client side (only if iCloud Photos is enabled) and could allow for e2e encryption on the server.
The argument of "but they could change it to be worse!" applies to any implementation and any policy. That's why the specifics matter imo. Apple controls the OS and distribution, governments control the legislation (which is hopefully correlated with the public interest). The existing 'megacorp' model doesn't have a non-policy defense to this kind of thing so it's always an argument about policy. In this specific implementation I think the policy is fine. That may not hold if they try to use it for something else (at which point it's worth fighting against whatever that bad policy is).
A good solution from Apple to the CSAM problem (and I think this one threads the needle for a decent compromise) could prevent worse policy from the government later (attempts to ban encryption or to require key escrow, like in the 90s).
This implementation as it stands reveals less information about end users and could allow them to enable e2ee for photos on their servers - that's a better outcome than the current state (imo).
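To make "running some parts of it client side (only if iCloud Photos is enabled)" concrete, here's a minimal Python sketch of the flow being described. It is not Apple's actual protocol - the real design uses a perceptual hash (NeuralHash), a blinded hash database, and threshold cryptography so the server learns nothing about individual photos - and every name here (KNOWN_HASH_DB, encrypt_for_cloud, prepare_upload) is a hypothetical stand-in, just to show where each step runs.

```python
import hashlib
import os
from typing import Optional, Tuple

# Hypothetical stand-ins; none of these names come from Apple's APIs.
KNOWN_HASH_DB = {"hash-of-known-image-a", "hash-of-known-image-b"}

def perceptual_hash(photo: bytes) -> str:
    # Placeholder: the real system uses a perceptual hash so visually
    # similar images still match; SHA-256 is used here only for brevity.
    return hashlib.sha256(photo).hexdigest()

def encrypt_for_cloud(photo: bytes, user_key: bytes) -> bytes:
    # Placeholder for real authenticated encryption under a user-held key.
    return bytes(b ^ user_key[i % len(user_key)] for i, b in enumerate(photo))

def prepare_upload(photo: bytes, icloud_photos_enabled: bool,
                   user_key: bytes) -> Optional[Tuple[bytes, dict]]:
    """On-device step: nothing runs unless iCloud Photos is enabled.
    The photo is encrypted and a 'voucher' records whether its hash matched
    the known database. (In the real design the voucher is itself encrypted,
    so the server learns nothing until a threshold of matches is crossed.)"""
    if not icloud_photos_enabled:
        return None  # no scan, no upload
    voucher = {"matched": perceptual_hash(photo) in KNOWN_HASH_DB}
    return encrypt_for_cloud(photo, user_key), voucher

# The server stores only the ciphertext and the voucher, never the plaintext.
result = prepare_upload(b"raw image bytes", True, os.urandom(32))
```

The point of this structure is that the only per-photo information leaving the device beyond the ciphertext is the (here simplified) match record, which is what makes the approach compatible with e2e encrypting the photos themselves.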
1. Encrypt everything in the cloud, but also hash these items on the device and upload the hashes. Notify Apple so they can notify law enforcement if someone is storing illegal material.
2. Everything is unencrypted in the cloud. No actions are taken on the device. No notifications to authorities.
With option one the sanctity of device ownership is breached; with option two it's maintained. Maintaining that stark distinction is hugely important for what future actions can be taken in the public eye. The normalization of on-device actions that work against the user must be fought wherever it occurs.
Your line of thinking is dangerous because you're ignoring the public perception of a device you own actively working against you. Apple's behavior cannot be allowed to be considered normal.
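For contrast, here is a sketch of option two from the list above, under the same hypothetical names as the earlier example: the device takes no action at all, and any matching happens purely on the provider's servers against plaintext it already holds.

```python
import hashlib

KNOWN_HASH_DB = {"hash-of-known-image-a", "hash-of-known-image-b"}  # hypothetical

def option_two_device_upload(photo: bytes) -> bytes:
    # The device does nothing against the user; it simply uploads the photo,
    # which sits unencrypted (from the provider's point of view) in the cloud.
    return photo

def option_two_server_scan(stored_photo: bytes) -> bool:
    # Any scanning happens server side, on plaintext the provider already has.
    return hashlib.sha256(stored_photo).hexdigest() in KNOWN_HASH_DB
```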
To be fair - I think reasonable people can disagree on this.
I don't think it's a rationalization to point out that it only occurs when the same baseline conditions are met (using the cloud). I think those constraints/specifics matter. I wouldn't be in favor of the policy if they were different (and I'm not even sure I'm in favor of it now).
My personally preferred outcome would be e2ee by default for everything without any of this, but I also understand the concerns of NCMEC and the general tradeoffs/laws around this stuff (and future regulatory risk of CSAM) - and just the general issue of reducing child sexual abuse.
I am also in favour of E2E by default for everything, without any device- or cloud-based scanning. However, Apple doesn't want to be caught having developed a service that enables child exploitation. Doing nothing may lead to even more invasive requirements being legally forced on them by governments, so Apple is stuck with a dilemma. Also, let's not forget that Apple itself should not want child exploitation to occur, and therefore should do something.
The question I have for drenvuk is: how else can Apple prevent or detect child exploitation and the storage or distribution of such content on its services?
Basically what I said here: https://hackernews.hn/item?id=28162418