
What are those cases where they might be checked by humans? To determine whether it's an innocent baby bath? If you have naked photos of a partner that happen to hit a statistical match for patterns similar to CSAM? These aren't far-fetched scenarios; they're exactly the types of photos most likely to be flagged. Are you okay with those photos being passed around Apple's security review team for entertainment? Leaked to the press if you later run for office?

How about in 15 years, when your small children aren't small? Is this the magical software that can tell the difference between 18-year-old boobs and 17-year-old ones? The danger isn't to child molesters, it's to people who get incorrectly flagged as child molesters and have to fight to prove their innocence.



I'm not even sure whether this is a joke or you're serious.

It's a check against existing hashes in a big database of confirmed CSAM. What are the chances that photos of your partner are in that database? If your partner is older than 12, they're 0%.

Who takes on more risk of being sued if the photos leak, you or Apple?

The last part isn't worth discussing, because the children in that DB are younger than 12.
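
For what it's worth, here's a toy sketch of the kind of check being described: compute a perceptual hash of a photo and look it up against a set of hashes of known images. This uses the open-source imagehash library rather than Apple's NeuralHash, and the hash values and distance threshold below are made-up placeholders, not anything from the real system.

    import imagehash
    from PIL import Image

    # Placeholder hashes standing in for a database of known images.
    # The values here are invented for illustration only.
    KNOWN_HASHES = {
        imagehash.hex_to_hash("ffd8e0c0b0a09080"),
        imagehash.hex_to_hash("0123456789abcdef"),
    }

    # Arbitrary example threshold; perceptual hashes of near-copies
    # (resized, recompressed) of the same image differ by only a few bits.
    MAX_DISTANCE = 4

    def matches_known_image(path):
        """Return True if the photo is a near-copy of an image in the set."""
        h = imagehash.phash(Image.open(path))
        return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)

The lookup is against fingerprints of specific, already-identified images, which is the point I'm making.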


I've now read up on NeuralHash a bit more, and while I think the idea that this is just a hash is slightly overstated, you're right: my above comment assumed this was a classifier rather than a perceptual hash.


It is very far-fetched. The system matches copies of specific photos, not subject matter like “baby taking a bath”.
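
To make that concrete, here's a minimal average-hash (aHash), one of the simplest perceptual hashes (again, not NeuralHash): it encodes the coarse brightness pattern of one specific image, which is why it matches resized or recompressed copies of that image rather than recognizing what's in it. The file names are hypothetical.

    from PIL import Image

    def average_hash(path):
        """64-bit fingerprint of the coarse brightness pattern of an image."""
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a, b):
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # A resized copy of the same photo hashes to nearly the same value;
    # a different photo of the same general subject does not.
    h1 = average_hash("photo.jpg")
    h2 = average_hash("photo_resized.jpg")
    print(hamming(h1, h2))

As I understand it, NeuralHash swaps the fixed downscale-and-threshold step for a learned embedding, but the matching step is still a comparison against hashes of specific known images.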



