Scanning iCloud Photos for Child Sexual Abuse
The Child Sexual Abuse Material (CSAM) Scanning Tool allows website owners to proactively identify and take action on CSAM located on their website. By enabling this tool, Cloudflare will compare content served for your website through the Cloudflare cache to known lists of CSAM. These lists are provided to Cloudflare by leading child safety advocacy groups such as the National Center for Missing and Exploited Children (NCMEC).
Financial Times (via Hacker News, reprint):
Apple plans to scan US iPhones for child abuse imagery
Matthew Green (via Hacker News):
I’ve had independent confirmation from multiple people that Apple is releasing a client-side tool for CSAM scanning tomorrow. This is a really bad idea.
These tools will allow Apple to scan your iPhone photos for photos that match a specific perceptual hash, and report them to Apple servers if too many appear.
[…]
This sort of tool can be a boon for finding child pornography in people’s phones. But imagine what it could do in the hands of an authoritarian government?
[…]
The way Apple is doing this launch, they’re going to start with non-E2E photos that people have already shared with the cloud. So it doesn’t “hurt” anyone’s privacy.
It’s implied but not specifically stated that they are not scanning the contents of iCloud Backup (which is not E2E), only iCloud Photo Library.
But you have to ask why anyone would develop a system like this if scanning E2E photos wasn’t the goal.
[…]
The hashes use a new and proprietary neural hashing algorithm that Apple has developed, and gotten NCMEC to agree to use.
We don’t know much about this algorithm. What if someone can make collisions?
Or what if the AI simply makes mistakes?
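Green's concern can be made concrete with a toy perceptual hash. NeuralHash itself is unpublished, so this sketch uses a classic "difference hash" (dHash), not Apple's algorithm; it shows why such hashes deliberately survive small edits, and therefore why collisions and mistakes are plausible failure modes:

```python
# A toy "difference hash" (dHash): not Apple's NeuralHash (which is
# unpublished), but it illustrates why perceptual hashes tolerate small
# edits and why distinct images can nonetheless collide.

def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values -> 64-bit hash.
    Each bit records whether a pixel is brighter than its right neighbor."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A stand-in image, downsampled to a 9-wide, 8-tall grayscale grid.
img = [[(x * 13 + y * 7) % 256 for x in range(9)] for y in range(8)]

# A re-encoded copy: every pixel slightly brightened.
copy = [[min(255, p + 10) for p in row] for row in img]

# Brightness shifts preserve which neighbor is brighter, so the
# hashes match exactly even though the raw pixel data differs.
assert hamming(dhash(img), dhash(copy)) == 0
```

A match is typically declared when the Hamming distance falls under some threshold, which is exactly what makes adversarial collisions worth worrying about.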
Chance Miller (Apple, Hacker News, MacRumors):
Apple is today announcing a trio of new efforts it’s undertaking to bring new protection for children to iPhone, iPad, and Mac. This includes new communications safety features in Messages, enhanced detection of Child Sexual Abuse Material (CSAM) content in iCloud, and updated knowledge information for Siri and Search.
[…]
If there is an on-device match, the device then creates a cryptographic safety voucher that encodes the match result. A technology called threshold secret sharing is then employed. This ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.
[…]
Apple isn’t disclosing the specific threshold it will use — that is, the number of CSAM matches required before it is able to interpret the contents of the safety vouchers. Once that threshold is reached, however, Apple will manually review the report to confirm the match, then disable the user’s account and send a report to the National Center for Missing and Exploited Children.
There’s a technical summary here.
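The threshold secret sharing mentioned above is a standard cryptographic construction. A minimal Shamir-style sketch over a prime field — the prime, threshold, and share count here are illustrative stand-ins, not Apple's parameters:

```python
import random

# Toy Shamir threshold secret sharing: any t shares reconstruct the
# secret; fewer than t reveal nothing about it. Apple's safety-voucher
# scheme builds on this idea; all parameters below are illustrative.

P = 2**61 - 1  # a Mersenne prime, large enough for a demo

def split(secret, n, t):
    """Return n shares (x, y) of a random degree-(t-1) polynomial
    with constant term `secret`; any t shares recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = split(secret, n=10, t=4)  # e.g. a hypothetical threshold of 4
assert reconstruct(shares[:4]) == secret            # any 4 shares suffice
assert reconstruct(random.sample(shares, 4)) == secret
```

The point of the construction is the cliff at the threshold: below it, the server holds shares that are information-theoretically useless.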
Many other cloud storage services are already doing that, in a much less privacy-preserving way. In a way, it’s their responsibility given that they’re storing the data and it is illegal to possess such content in many parts of the world.
I have always been concerned that this system could be weaponized as a way to gain access to someone’s account. For example:
- Add the hash of a non-pornographic image to the database
- Using a burner email address, email the non-pornographic image to the target’s Gmail address. The target wouldn’t think anything of it.
- The innocent image would trigger a CP alert, giving law enforcement the pretense it needs to access the account
I wonder how easy it is to add a photo to someone’s iCloud Photo Library.
What they say: “This algorithm will scan your images for potential child abuse”
What it will actually do: Looks at your nudes without your consent and sends them to a team who will of course have people who save them and share them when they see it’s not child abuse.
That would never happen, of course. Apple would probably argue that you don’t really have to trust their team because threshold secret sharing will prevent them from needing to review the images, anyway. But who knows what threshold they’re using or how reliable the perceptual hashing actually is.
One takeaway is that, CSAM detection aside, Apple already has access to these photos. You shouldn’t upload anything to the cloud that you want to keep private. But Apple isn’t giving users much choice. It doesn’t let you choose a truly private cloud backup or photo syncing provider. If you don’t use iCloud Photo Library, you have to use Image Capture, which is buggy. And you can’t use iCloud to sync some photos but not others. Would you rather give Apple all your photos or risk losing them?
And, now that the capability is built into Apple’s products, it’s hard to believe that they won’t eventually choose to or be compelled to use it for other purposes. They no longer have the excuse that they would have to “make a new version of the iPhone operating system.” It probably doesn’t even require Apple’s cooperation to add photo hashes to the database.
Previously:
- Apple Dropped Plans for End-to-End Encrypted iCloud Backups After FBI Objected
- The Time Tim Cook Stood His Ground Against the FBI
- Facebook Solicits Nude Photos to Stop Revenge Porn
- Yahoo’s FISA E-mail Scan
- FBI Asks Apple for Secure Golden Key
- Apple Patches “Find My iPhone” Exploit
Update (2021-08-06): Nick Heer, regarding my question about adding a photo to someone else’s iCloud Photo Library:
AirDropped images are automatically added to the photo library, aren’t they?
Because Apple is scanning iCloud Photos for the CSAM flags, it makes sense that the feature does not work with iCloud Photos disabled. Apple has also confirmed that it cannot detect known CSAM images in iCloud Backups if iCloud Photos is disabled on a user’s device.
I think a fair counterargument is that Apple’s more proactive approach to child safety takes away one of law enforcement’s favourite complaints about commonplace encryption.
But it represents a similar trade-off to the aforementioned iCloud backups example. Outside of the privacy absolutist’s fictional world, all of privacy is a series of compromises. Today’s announcements raise questions about whether these are the right compromises to be making. What Apple has built here is a local surveillance system that all users are supposed to trust. We must believe that it will not interfere with our use of our devices, that it will flag the accounts of abusers and criminals, and that none of us innocent users will find ourselves falsely implicated. And we must trust it because it is something Apple will be shipping in a future iOS update, and it will not have an “off” switch.
Perhaps this is the only way to make a meaningful dent in this atrocious abuse, especially since the New York Times and the NCMEC shamed Apple for its underwhelming reporting of CSAM on its platforms. But are we prepared for the likely expansion of its capabilities as Apple and other tech companies are increasingly pressured to shoulder more responsibility for the use of their products? I do not think so. This is a laudable effort, but enough academics and experts in this field have raised red flags for me to have some early concerns and many questions.
Andrew Orr (in 2019, MacRumors):
Occasionally I like to check up on Apple’s security pages and privacy policies. I noticed something new in the privacy policy, which was last updated May 9, 2019. Under the “How we use your personal information” header, one of the paragraphs now reads (emphasis added):
We may also use your personal information for account and network security purposes, including in order to protect our services for the benefit of all our users, and pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material.
Apple may have even been doing this for years, but this is the first time this has appeared in its privacy policy. And I checked earlier versions using the Wayback Machine.
[…]
Speaking at CES 2020, Apple’s chief privacy officer Jane Horvath mentioned photos backed up to iCloud in terms of scanning.
[…]
A search warrant revealed that Apple scans emails for this content.
Apple’s scanning does not detect photos of child abuse. It detects a list of known banned images added to a database, which are initially child abuse imagery found circulating elsewhere. What images are added over time is arbitrary. It doesn’t know what a child is.
Apple thinks photo scanning is non-negotiable — that for legal and PR reasons, you can’t be a major consumer tech company and not scan users’ photos — so the only way to encrypt photos on-device was to develop & implement client-side scanning.
My read is that the FBI keeps harping about CSAM and “going dark”. It’s the hardest thing to defend, so now they can say “no one can use iCloud to store CSAM and I won’t build a backdoor into iCloud encryption”
They are if they are moving server-side scanning to “client-side hashing then matching on the server-side”. If this is a pre-req for encrypted iCloud data, then this is potentially a win. But, this is all negated by absence of auditability of the hash DB.
If it came out that Apple was adding anything other than CSAM fingerprints to the database, it’d be ruinous to the company’s reputation. As bad as if they were pilfering from Apple Cash accounts.
It sounds like Apple is not adding anything to the database, so it’s not in a position to make any guarantees. It’s just using an opaque list of hashes supplied by a third party.
The hash databases used by CSAM scanning methods have little oversight.
[…]
In any case, all of this requires us to place trust in automated systems using unproven machine learning magic, run by technology companies, and given little third-party oversight. I am not surprised to see people worried by even this limited scope, never mind the possibilities of its expansion.
Government: <adds images known to be from target to database>
Apple: <matches, uploads contents of target’s phone to government server for further inspection>
Government: thanku appl
Whoever controls this list can search for whatever content they want on your phone, and you don’t really have any way to know what’s on that list because it’s invisible to you (and just a bunch of opaque numbers, even if you hack into your phone to get the list.)
The theory is that you will trust Apple to only include really bad images. Say, images curated by the National Center for Missing and Exploited Children (NCMEC). You’d better trust them, because trust is all you have.
[…]
This means that, depending on how they work, it might be possible for someone to make problematic images that “match” entirely harmless images. Like political images shared by persecuted groups. These harmless images would be reported to the provider. […] And the problem is that none of this technology was designed to stop this sort of malicious behavior. In the past it was always used to scan unencrypted content. If deployed in encrypted systems (and that is the goal) then it provides an entirely new class of attacks.
[…]
Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content.
That’s the message they’re sending to governments, competing services, China, you.
EFF (tweet, Hacker News, MacRumors):
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.
[…]
Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the “end-to-end” promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company’s stance toward strong encryption.
But knowing this uses a neural net raises all kinds of concerns about adversarial ML, concerns that will need to be evaluated.
Apple should commit to publishing its algorithms so that researchers can try to develop “adversarial” images that trigger the matching function, and see how resilient the tech is.
I am vehemently opposed to scanning of personal information, be it in the cloud (under end-to-end encryption), or on our local devices. The long term risk for misuse of such technology far outweighs any short term benefit.
[…]
There are world governments of all kinds, and they all have questionable policies of varying degrees. As soon as they tell a corporation to implement their dubious dragnet or suffer the consequences, the corporation will promptly give them access to your photos, emails, and any other data.
The reason Apple’s approach is going far too far comes down to one thing: the difference between law enforcement, where an agency needs good reason to access private data, and surveillance. Apple’s approach is surveillance. (And from the company that made the 1984 ad.)
A narrowly defined backdoor is still a backdoor. “Partial” digital privacy isn’t a thing -- you either have it or you don’t.
If you think you can design a system that violates privacy only for some people, you can’t. I don’t care who you are.
Apple has won enormous amounts of goodwill by declaring that privacy is a human right, and is about to destroy all of it at once by building a technology to have your phone scan your pictures and turn you over to law enforcement if they’re the wrong sort of pictures.
It doesn’t matter what sort of pictures motivated this feature; eventually governments will force its use for all sorts of things, and many governments do not respect human rights. I’m completely aghast that this is being contemplated.
Here’s the thing about “slippery slope” arguments: a slope is rarely slippery, but it still goes downhill.
It took 12 years to go from “your Mac app needs to be code signed for the keychain and firewall” to “you need to upload every build of your Mac app to Apple for approval”.
It is difficult for me to reconcile the Apple that makes ostensibly clever machine learning stuff that can match child abuse imagery, even after it has been manipulated, with the Apple that makes software that will fail to sync my iPhone for twenty minutes before I give up.
Same with the iMessage scanning feature and iMessage itself.
Now that Apple has willingly built spyware into iOS and macOS, within 10 years this tech will:
(1) be mandated by government in all end-to-end encrypted apps; and
(2) expand to scan for terrorism, disinformation, "misinformation", then eventually political images and memes.
This is not a drill.
Police are already misusing location data gathered for COVID contact tracing even though everyone SWORE it wouldn’t be used for anything but health purposes.
Clearly a rubicon moment for privacy and end-to-end encryption.
I worry if Apple faces anything other than existential annihilation for proposing continual surveillance of private messages then it won’t be long before other providers feel the pressure to do the same.
[…]
If Apple are successful in introducing this, how long do you think it will be before the same is expected of other providers? Before walled-garden prohibit apps that don’t do it? Before it is enshrined in law?
Really seems like Apple tried to protect customer data in the cloud by scanning for illegal material locally on the phone, thereby creating a new kind of risk for customer data on the phone.
To address these concerns, Apple provided additional commentary about its plans today.
Apple’s known CSAM detection system will be limited to the United States at launch, and to address the potential for some governments to try to abuse the system, Apple confirmed to MacRumors that the company will consider any potential global expansion of the system on a country-by-country basis after conducting a legal evaluation.
[…]
Even if the threshold is exceeded, Apple said its manual review process would serve as an additional barrier and confirm the absence of known CSAM imagery. Apple said it would ultimately not report the flagged user to NCMEC or law enforcement agencies and that the system would still be working exactly as designed.
I wonder how much manual review Apple is planning to do, given that it says there’s only a 1 in 1 trillion probability of incorrectly flagging an account.
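For a sense of how a threshold could yield a number like that, here is the binomial-tail arithmetic with invented inputs — Apple has published neither its per-image false-match rate nor its threshold, so these figures are purely illustrative:

```python
from math import comb

# Back-of-the-envelope for the "1 in 1 trillion" claim. The per-image
# false-match rate and library size below are made up for illustration;
# Apple has disclosed neither its threshold nor NeuralHash's error rate.

def p_account_flagged(n_photos, p_false, threshold, terms=60):
    """P(at least `threshold` of n_photos falsely match): binomial tail,
    truncated after `terms` terms since they decay fast for tiny p_false."""
    return sum(
        comb(n_photos, k) * p_false**k * (1 - p_false)**(n_photos - k)
        for k in range(threshold, min(threshold + terms, n_photos) + 1)
    )

p = 1e-6      # hypothetical per-image false-match rate
n = 10_000    # hypothetical photo-library size
for t in (1, 5, 10, 30):
    print(f"threshold {t}: {p_account_flagged(n, p, t):.3g}")
```

Even a modest threshold drives the account-level false-positive probability down by many orders of magnitude, which is presumably why Apple expects manual review to be rare.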
In an internal memo distributed to the teams that worked on this project and obtained by 9to5Mac, Apple acknowledges the “misunderstandings” around the new features, but doubles down on its belief that these features are part of an “important mission” for keeping children safe.
It’s hard not to feel that a bait and switch is being presented. Apple announced that disabling iCloud Photos bypasses CSAM detection. This practically ensures failure, as anyone involved in child exploitation will of course disable iCloud Photos. So then what? Set up to fail...
So we already have the on-device detection, and limiting it to iCloud Photos will fail. This means that further measures will be required, i.e., scanning regardless of whether iCloud Photos is enabled.
Seems like Apple’s idea of doing iCloud abuse detection with this partially-on-device check only makes sense in two scenarios: 1) Apple is going to expand it to non-iCloud data stored on your devices or 2) Apple is going to finally E2E encrypt iCloud?
But if it is to enable end-to-end iCloud encryption and it is not applied to purely local files, that seems like an overall privacy benefit.
If we follow that line of speculation further, it makes me wonder why Apple would create so much confusion in its communication of this change. Why drop this news at the beginning of August, disconnected from any other product or service launch? Why not announce it and end-to-end iCloud encryption at the same time, perhaps later this year?
Update (2021-08-09): John Gruber:
The database will be part of iOS 15, and is a database of fingerprints, not images. Apple does not have the images in NCMEC’s library of known CSAM, and in fact cannot — NCMEC is the only organization in the U.S. that is legally permitted to possess these photos.
[…]
All of these features are fairly grouped together under a “child safety” umbrella, but I can’t help but wonder if it was a mistake to announce them together. Many people are clearly conflating them, including those reporting on the initiative for the news media.
[…]
In short, if these features work as described and only as described, there’s almost no cause for concern. […] But the “if” in “if these features work as described and only as described” is the rub. That “if” is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.
Glenn Fleishman and Rich Mogull:
The problem is that exploitation of children is a highly asymmetric problem in two different ways. First, a relatively small number of people in the world engage in a fairly massive amount of CSAM trading and direct online predation.
[…]
The other form of asymmetry is adult recognition of the problem. Most adults are aware that exploitation happens—both through distribution of images and direct contact—but few have personal experience or exposure themselves or through their children or family. That leads some to view the situation somewhat abstractly and academically. On the other end, those who are closer to the problem—personally or professionally—may see it as a horror that must be stamped out, no matter the means. Where any person comes down on how far tech companies can and should go to prevent exploitation of children likely depends on where they are on that spectrum.
[…]
(Spare some sympathy for the poor sods who perform the “manual” job of looking over potential CSAM. It’s horrible work, and many companies outsource the work to contractors, who have few protections and may develop PTSD, among other problems. We hope Apple will do better. Setting a high threshold, as Apple says it’s doing, should dramatically reduce the need for human review of false positives.)
[…]
Apple’s head of privacy, Erik Neuenschwander, told the New York Times, “If you’re storing a collection of C.S.A.M. material, yes, this is bad for you. But for the rest of you, this is no different.”
Given that only a very small number of people engage in downloading or sending CSAM (and only the really stupid ones would use a cloud-based service; most use peer-to-peer networks), this is a specious remark, akin to saying, “If you’re not guilty of possessing stolen goods, you should welcome an Apple camera in your home that lets us prove you own everything.” Weighing privacy and civil rights against protecting children from further exploitation is a balancing act. All-or-nothing statements like Neuenschwander’s are designed to overcome objections instead of acknowledging their legitimacy.
What happens when China announces its version of the NCMEC, which not only includes the horrific imagery Apple’s system is meant to capture, but also images and memes the government deems illegal?
The fundamental issue — and the first reason why I think Apple made a mistake here — is that there is a meaningful difference between capability and policy. One of the most powerful arguments in Apple’s favor in the 2016 San Bernardino case is that the company didn’t even have the means to break into the iPhone in question, and that to build the capability would open the company up to a multitude of requests that were far less pressing in nature, and weaken the company’s ability to stand up to foreign governments. In this case, though, Apple is building the capability, and the only thing holding the company back is policy.
[…]
Apple is compromising the phone that you and I own-and-operate, without any of us having a say in the matter. Yes, you can turn off iCloud Photos to disable Apple’s scanning, but that is a policy decision; the capability to reach into a user’s phone now exists, and there is nothing an iPhone user can do to get rid of it.
@Apple now circulating a propaganda letter describing the internet-wide opposition to their decision to start checking the private files on every iPhone against a secret government blacklist as “the screeching voices of the minority.”
The NCMEC database […] contains countless non-CSAM pictures that are entirely legal not only in the U.S. but globally. […] Increasing the scope of scanning is barely a slippery slope, they’re already beyond the stated scope of the database.
This is where the human reviewers come in. In theory, it doesn’t matter if the database contains non-CSAM pictures—either because they were collected along with CSAM ones or because a government deliberately added them to the database—because the reviewers will see that the user did not actually have CSAM and so will decline to make a report. However, this assumes (1) a quality of review that Apple has not previously demonstrated, and (2) that Apple will not be pressured or tricked into hiring reviewers that are working towards another purpose.
What would you say if Apple announced that Siri will always listen and report private conversations (not just those triggered by “Hey Siri”) but only if a really good neural network recognizes them as criminal, and there’s PSI to protect you?
RE: Apple’s plan to scan every photo in iMessage with machine learning and alert parents to nudity. […] Let me share so you can imagine how it will be misused.
Steve Troughton-Smith (also Paul Haddad):
I feel like Apple could easily have built these new features to outright prevent explicit/illegal material from being viewed or saved on its platforms, while sidestepping the slippery slope outcry entirely. […] I mean why are they letting this stuff onto iCloud Photos in the first place?
Perhaps the thinking is that the matching needs to remain hidden so that people can’t learn how to evade it.
We’re past the point where giving Apple the benefit of the doubt can be interpreted as anything other than willful ignorance from a place of Western privilege. These aren’t hypotheticals, we already have examples of Apple’s policies failing people in other countries.
So end-to-end encryption means nothing?
Device maker can log/view/save your content right before it gets sent (encrypted) or right after it’s received (unencrypted), but your content was still E2E encrypted!
In my opinion, there are no easy answers here. I find myself constantly torn between wanting everybody to have access to cryptographic privacy and the reality of the scale and depth of harm that has been enabled by modern comms technologies.
[…]
I have friends at both the EFF and NCMEC, and I am disappointed with both NGOs at the moment. Their public/leaked statements leave very little room for conversation, and Apple’s public move has pushed them to advocate for their equities to the extreme.
[…]
Likewise, the leaked message from NCMEC to Apple’s employees calling legitimate questions about the privacy impacts of this move “the screeching voices of the minority” was both harmful and unfair.
[…]
One of the basic problems with Apple’s approach is that they seem desperate to avoid building a real trust and safety function for their communications products. There is no mechanism to report spam, death threats, hate speech, NCII, or any other kinds of abuse on iMessage.
As a result, their options for preventing abuse are limited.
Say you’re a big Apple fan who is really upset with the photo scanning announcement. In order to send a market signal by switching phones, you would also have to buy a new watch, give up AirDrop / iMessage with your friends, not watch Ted Lasso on your new phone, etc etc etc
At some point ecosystem lock-in creates so many different switching costs that the market can no longer send meaningful signals about what’s important, leaving only public opinion and government regulation to shape a company’s behavior. That feels real icky to me!
Apple’s dark patterns that turn iCloud uploads on by default, and flip it back on when moving to a new phone or switching accounts, exacerbate the problem.
More specifically, the concern involves where this type of technology could lead if Apple is compelled by authorities to expand detection to other data that a government may find objectionable. And I’m not talking about data that is morally wrong and reprehensible. What if Apple were ordered by a government to start scanning for the hashes of protest memes stored on a user’s phone? Here in the U.S., that’s unlikely to happen. But what if Apple had no choice but to comply with some dystopian law in China or Russia? Even in Western democracies, many governments are increasingly exploring legal means to weaken privacy and privacy-preserving features such as end-to-end encryption, including the possibility of passing legislation to create backdoor access into messaging and other apps that officials can use to bypass end-to-end encryption.
So these worries people are expressing today on Twitter and in tech forums around the web are understandable. They are valid. The goal may be noble and the ends just—for now—but that slope can also get slippery really fast.
While child exploitation is a serious problem, and while efforts to combat it are almost unquestionably well-intentioned, Apple’s proposal introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products.
[…]
Apple’s current path threatens to undermine decades of work by technologists, academics and policy advocates towards strong privacy-preserving measures being the norm across a majority of consumer electronic devices and use cases. We ask that Apple reconsider its technology rollout, lest it undo that important work.
Most of the heat RE: neuralMatch is rooted in ignorance of what it does. I’m not here to educate.
But there’s a valid worry that hostile governments could use it to rat out their citizens for non-CSAM offenses.
Some concrete actions Apple could take to fix that[…]
[…]
Guarantee the database is global, not a localized resource.
[…]
Publish neuralMatch as an all-purpose image matching API, so third parties can audit it on a technical level.
[…]
Allow third parties to test the neuralMatch API specifically against the CSAM hashes, so they can audit it for the kinds of politically-motivated matches people are worried about.
Looks like the NeuralHash is included in the current beta in the Vision framework.
Oliver Kuederle (via Hacker News):
At my company, we use “perceptual hashes” to find copies of an image where each copy has been slightly altered. This is in the context of stock photography, where each stock agency (e.g. Getty Images, Adobe Stock, Shutterstock) adds their own watermark, the image file ID, or sharpens the image or alters the colours slightly, for example by adding contrast.
[…]
It shouldn’t come as a surprise that these algorithms will fail sometimes. But in the context of 100 million photos, they do fail quite often. And they don’t fail in acceptable ways[…]
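Kuederle's scale point can be quantified with a birthday-style estimate. Even under an optimistic model where hashes are uniformly random (real perceptual hashes cluster on similar-looking images, so reality is worse), accidental near-matches become expected at catalog scale; the 64-bit size and Hamming thresholds here are illustrative:

```python
from math import comb

# The birthday effect behind "they do fail quite often": even assuming
# uniformly random hashes (optimistic — real perceptual hashes cluster
# on similar images), accidental near-matches are expected at scale.
# The 64-bit hash size and distance thresholds are illustrative.

BITS = 64

def p_pair_match(max_hamming):
    """P(two uniform 64-bit hashes differ in at most max_hamming bits)."""
    return sum(comb(BITS, k) for k in range(max_hamming + 1)) / 2**BITS

n = 100_000_000           # catalog size from the quote above
pairs = n * (n - 1) // 2  # number of distinct photo pairs
for d in (0, 4, 8):
    print(f"distance <= {d}: ~{pairs * p_pair_match(d):.3g} accidental match pairs")
```

With a near-match threshold of 8 bits out of 64, this optimistic model already predicts on the order of a million accidental match pairs among 100 million photos.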
The laws related to CSAM are very explicit. 18 U.S. Code § 2252 states that knowingly transferring CSAM material is a felony. (The only exception, in 2258A, is when it is reported to NCMEC.) In this case, Apple has a very strong reason to believe they are transferring CSAM material, and they are sending it to Apple -- not NCMEC.
It does not matter that Apple will then check it and forward it to NCMEC. 18 U.S.C. § 2258A is specific: the data can only be sent to NCMEC. (With 2258A, it is illegal for a service provider to turn over CP photos to the police or the FBI; you can only send it to NCMEC. Then NCMEC will contact the police or FBI.) What Apple has detailed is the intentional distribution (to Apple), collection (at Apple), and access (viewing at Apple) of material that they have strong reason to believe is CSAM. As it was explained to me by my attorney, that is a felony.
The problem with any take on the Apple/CSAM stuff is that there are so many horrible people in the world that do horrible things to people, and so many governments that do horrible things to people, and pretty much any tech that thwarts one of them enables the other one.
There’s an argument, with support from Game Theory, that says that Apple can set a high threshold for the number of matches, and only detect and report a few cases of CSAM. Indeed, even that may be unnecessary to drive anyone currently sharing CSAM to abandon the use of iCloud Photos altogether.
That would be a win for Apple but not really help solve the problem as a whole.
The worst case scenario for the initial implementation isn’t necessarily false positives, though those would certainly be awful.
Worst case scenario is child abusers don’t use iCloud Photos, and Apple’s NCMEC report #s don’t increase much.
CyberTipline is the nation’s centralized reporting system for the online exploitation of children, including child sexual abuse material, child sex trafficking and online enticement. In 2020, the CyberTipline received more than 21.7 million reports.
Only 265 were from Apple. I’m not sure how to square this with Apple’s chief privacy officer stating in January 2020 that it was already scanning photos server-side. Are the criminals already avoiding iCloud, or is Apple’s matching not very effective?
Stefano Quintarelli (via Hacker News):
The point I try to make is that it will do little to protect children (while weakening users’ privacy and pushing criminals to hide better) but it will be used as an excuse to justify tight control of the devices in order to perpetuate their apparent monopolistic power through the App Store, at a time when such behavior is under fire from competition authorities.
The whole point of end-to-end encryption is to prevent the provider of the service from itself being coerced into giving up information about its users. Apple is building exactly the opposite of that.
Will you even know when the system is abused? The US government has already forced companies to comply while forbidding them from telling their users that this is happening.
This is about an infrastructure which can be put to use for any and all of your data. It doesn’t matter what Apple claims it is limited to doing now. What matters is that this is a general purpose capability.
[…]
And what is incredibly stupid about this approach is that only technology-ignorant child-abusers will fail to turn off iCloud photo syncing, which at the moment is what the Apple system counts on. Everyone else gets spied on.
Aral Balkan (via Hacker News):
If Apple goes ahead with its plans to have your devices violate your trust and work against your interests, I will not write another line of code for their platforms ever again.
[…]
When I wrote The Universal Declaration of Cyborg Rights, I wanted to get people thinking about the kind of constitutional protections we would need to protect personhood in the digital network age.
This document serves to address these questions and provide more clarity and transparency in the process.
Apple’s FAQ is really disingenuous.
Why is Apple doing this now?
One of the significant challenges in this space is protecting children while also preserving the privacy of users. With this new technology, Apple will learn about known CSAM photos being stored in iCloud Photos where the account is storing a collection of known CSAM. Apple will not learn anything about other data stored solely on device.
Existing techniques as implemented by other companies scan all user photos stored in the cloud. This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.
This answer makes no sense in light of the facts that Apple was already doing server-side scanning and that the photos to now be scanned on device are ones that Apple would have access to via the cloud, anyway. [Update (2021-08-10): See the update below.]
Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
Our process is designed to prevent that from happening.
The answer is clearly “yes,” because the system relies on hashes that Apple has not vetted and on human review that may not work as intended.
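To see why hash collisions and mistakes are possible in principle, consider how a perceptual hash differs from a cryptographic one: visually similar images are meant to map to the same value. The toy average-hash (“aHash”) sketch below is not Apple’s NeuralHash, whose design is unpublished, but it shows both why small edits keep the hash stable and why visually different images can collide:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean.

    `pixels` is an 8x8 grid of grayscale values (0-255); real systems
    first downscale the image to this size.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > avg)
    return bits

# A smooth gradient image...
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# ...survives a small brightness shift (robust to re-encoding):
tweaked = [[min(255, p + 3) for p in row] for row in img]
assert average_hash(img) == average_hash(tweaked)

# But a visually different hard black/white split collides with it,
# because only the above/below-average pattern matters:
split = [[255 if (r * 8 + c) * 4 > 126 else 0 for c in range(8)] for r in range(8)]
assert average_hash(split) == average_hash(img)
```

NeuralHash is a learned model with a far larger state space than this, but the same structural property — many distinct images mapping to one value — is what makes collisions possible at all.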
Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands.
This is not the right question. We don’t really care whether Apple is the one adding the hashes, but simply whether they can be added. And the answer to that is clearly “yes.” There are already non-CSAM hashes in the NCMEC database. Apple has no ability to “refuse” because it never even sees the images. It trusts the hashes that it’s been given by the government.
Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.
Apple has already compromised user privacy in response to Chinese law. If, say, US law compelled them to scan non-iCloud photos, what choice would they have but to accede? Would they stop selling iPhones? Have every single engineer resign? I don’t see how this is a promise any company could keep, even if it wanted to.
Yes, I fully believe that Apple will refuse when asked, and I don’t question their motives for why this feature should exist. The problem is that I don’t believe it’s remotely enough. Some states do not have a record of taking no for an answer, and when recent history shows impactful decisions that go against those same values and morals as the result of successful pressure or regulatory capture, the situation recalls the words of a quite different Marx: “Those are my principles, and if you don’t like them… well, I have others.”
Apple isn’t “throwing a bone” to law enforcement. Apple is giving them an appetizer. When the biggest computer vendor in the US says it’s ok to put spyware on their own devices, this gives the green light to all legislators and agencies to start demanding everything they want.
Apple said that while it does not have anything to share today in terms of an announcement, expanding the child safety features to third parties so that users are even more broadly protected would be a desirable goal. Apple did not provide any specific examples, but one possibility could be the Communication Safety feature being made available to apps like Snapchat, Instagram, or WhatsApp so that sexually explicit photos received by a child are blurred.
Another possibility is that Apple’s known CSAM detection system could be expanded to third-party apps that upload photos elsewhere than iCloud Photos.
Update (2021-08-10): John Gruber and Rene Ritchie say that, actually, Apple’s servers have never scanned iCloud photo libraries for CSAM, only photos attached to certain messages stored on iCloud’s mail servers. Many sources reported Apple’s chief privacy officer saying at CES 2020 that photos uploaded to iCloud were scanned. However, some of these seem to be based on an article that has since been updated:
This story originally said Apple screens photos when they are uploaded to iCloud, Apple’s cloud storage service. Ms Horvath and Apple’s disclaimer did not mention iCloud, and the company has not specified how it screens material, saying this information could help criminals.
I have not found any official Apple statements saying what was scanned before.
In any case, this changes how I interpret Apple’s FAQ, as well as speculation for the future. If photo library scanning is new, Apple is not reimplementing a previously working system in a way that is potentially less private (since it could be easily tweaked to scan non-cloud photos). It also seems less likely to imply a switch to making iCloud Photos E2EE. It could simply be that Apple wanted to implement the fingerprinting in a way that took advantage of distributed CPU power. Or that it wanted to avoid having a server scanner that it could be compelled to use. This also explains why Apple only made 265 reports in 2020.
Apple’s Chief Privacy Officer seemed to say CSAM scanning of iCloud servers was already happening back in January 2020 and Apple’s Privacy Policy has allowed it since May 2019. However, it is now unclear whether iCloud server CSAM scanning has actually been happening.
Apple now seems to be telling media that server-based CSAM scanning will start when on-device scanning starts.
Or maybe it’s all done on-device when the old photos sync down from the cloud?
John Gruber (tweet):
I do wonder though, how prepared Apple is for manually reviewing a potentially staggering number of accounts being correctly flagged. Because Apple doesn’t examine the contents of iCloud Photo Library (or local on-device libraries), I don’t think anyone knows how prevalent CSAM is on iCloud Photos.
[…]
If the number is large, it seems like one innocent needle in a veritable haystack of actual CSAM collections might be harder for Apple’s human reviewers to notice.
Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver.
Perhaps feeling left out by the constant communication own-goals by Facebook, Apple set up the mother of all self-owns. It’s hard to think of a more massive communication fuck up, honestly. Again, because this topic is so big, so important, and so sensitive. Apple probably should have had an event, or at the very least a large-scale pre-brief with journalists and bloggers to talk through these issues.
[…]
Second, this is all more than a little ironic given the whole “backdoor” debate Apple forcefully stood up against when government agencies sought to force Apple to build in a way to get into iPhones. Tim Cook was adamant that Apple had no way to do this, and should not build it. If they didn’t exactly just create a way, they created a huge loophole that officials are going to test like velociraptors against an electric fence. Until they find the weakness… That’s what Apple set up here. The thing they stood up against! Apple can say all the right things. They also have to abide by laws. And laws are man-made things. Which change.
Apple commits to challenging requests to expand their CSAM detection to other material. So did UK ISPs, but they lost in court and did it anyway. Will Apple leave a market if put in the same position?
How would Apple not be able to add things to the hash list or change which list they use? NCMEC would need to publish some root hash of their list, and Apple would have to bind it into their client software in a way even they couldn’t change. That’s a tall order.
It is also deeply disappointing to see so many tech journalists make inferences for Apple when all of the pressure should be on Apple to answer the questions directly and on the record, instead of collecting concerns on background.
Matthew Panzarino (tweet, TechCrunch, MacRumors):
I spoke to Erik Neuenschwander, head of Privacy at Apple, about the new features launching for its devices.
[…]
The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos.
[…]
Well first, that is launching only for U.S. iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the U.S. when they speak in that way, and therefore it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government.
But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled.
He does not address Apple’s lack of ability to audit the hashes that it receives.
Update (2021-08-13): Nick Heer:
This note was appended one day after the Telegraph published its original report — that is, one day after it was cited by numerous other outlets. Unfortunately, none of those reports reflected the Telegraph’s correction and, because the Telegraph has a soft paywall and the title of the article remained “Apple scans photos to check for child abuse”, it is not obvious that there were any material changes to correct. Robinson’s Law strikes again.
Matthew Green (also: Edward Snowden):
People are telling me that Apple are “shocked” that they’re getting so much pushback from this proposal. They thought they could dump it last Friday and everyone would have accepted it by the end of the weekend.
Apple spent years educating the public on privacy for use as a marketing pitch and is now shocked that people care about privacy.
In a sense, it’s already too late. Apple hasn’t shipped the spyware yet, but Apple has already told the governments of the world that they will ship spyware in the operating system.
This is in stark contrast to what Apple said in the San Bernardino case.
Jokes aside, though, as engineers we regularly deal with complex systems that can be difficult for our users to understand. Having a hard time explaining how they work is one thing, but regardless of your position on this technology @Apple’s messaging has been unacceptable.
Their reluctance to clearly describe how the software works, their seeming inability to be straightforward about the fact that it fundamentally detects CSAM using filters that they control and uploads it to them, is very concerning. This isn’t how you inspire trust.
“Encrypted” and “on device” and “hashed” are not magic words that magically grant privacy. You can’t say “nothing is learned about the content on the device” if you can take the vouchers it sends you and decrypt them–even if you are “sure” they are CSAM. That’s just incorrect.
Being better “compared to the industry standard way” does not mean the technology is automatically “private”. And when you say you’re better than the industry standard from the perspective of being auditable, don’t be in a place where you can’t verify you are doing any better.
You may be wondering why Apple includes this manual step of reviewing images before they are reported; the answer is U.S. v Ackerman. In this case, it was found that NCMEC is effectively a government actor due to the power that Congress has granted them. As a result, if NCMEC reviews a file, it is considered a 4th Amendment search; however, if Apple views the file and informs NCMEC of the content (conducting a private search that isn’t covered by the 4th Amendment), then NCMEC is free to view the file to confirm the accuracy of the report.
By manually reviewing the content prior to reporting, the search isn’t considered to be a violation of constitutional rights in the U.S., and thus can be used as evidence in court.
[…]
Based on how the system is designed, there doesn’t appear to be any need for the full image to be uploaded, only the Safety Voucher. Based on this design choice, it’s logical to conclude that the intention is to move beyond just iCloud into other areas.
[…]
Scanning images uploaded to iCloud for known CSAM is unlikely to have a notable impact. In a memo (discussed further below) to Apple employees from Marita Rodriguez, the Executive Director of Strategic Partnerships at NCMEC said, “…I hope you take solace in knowing that because of you many thousands of sexually exploited victimized children will be rescued…” - which sounds great, but is entirely unrealistic. This scanning system only looks for known CSAM that has been reported and added to the hash database; this system targets those collecting and trading CSAM. It’s not targeted to those producing new CSAM. While putting the criminals that traffic in this awful material in prison is a laudable goal, the impact is unlikely to resemble the goals NCMEC has expressed.
[…]
The fact that NCMEC hasn’t issued an apology and clarification is telling; they are doing little to work with privacy advocates to find solutions that meet these complex challenges, and instead attack and demean.
One cannot reconcile these two things: 1) Apple rolling out an automated, warrantless, opt-out surveillance tool to all US iCloud customers, and 2) iPhone owners around the world having arbitrary data pushed to their devices by powerful nation-state adversaries who want them ruined.
The Pegasus story does not have a bookend. As it stands, it is very reasonable to assume that a hacker could push arbitrary data to your phone, including pictures. We have proof (and acknowledgement from Apple) that this is still happening. Because of the broken security of Apple devices, it is irresponsible to be rolling out an automated surveillance system, and frankly – exceedingly arrogant.
[…]
Apple’s CEO Tim Cook said at a Fortune event in 2017, when asked about its compliance with China’s censorship and problematic laws: “Each country in the world decides their laws and their regulations. And so your choice is: Do you participate, or do you stand on the sideline and yell at how things should be? You get in the arena, because nothing ever changes from the sideline.” Apple has been “in the arena” for well over a decade now, time for a scorecard.
But just because Apple has done its due diligence and made some careful choices in order to implement a tool to stop the spread of heinous material doesn’t mean that it’s off the hook. By making our phones run an algorithm that isn’t meant to serve us, but surveils us, it has crossed a line. Perhaps it was inevitable that the line would be crossed. Perhaps it’s inevitable that technology is leading us to a world where everything we say, do and see is being scanned by a machine-learning algorithm that will be as benevolent or malevolent as the society that implemented it.
Even if Apple’s heart is in the right place, my confidence that its philosophy will be able to withstand the future desires of law enforcement agencies and authoritarian governments is not as high as I want it to be. We can all be against CSAM and admire the clever way Apple has tried to balance these two conflicting needs, while still being worried about what it means for the future.
EFF (via Hacker News):
For example, the Five Eyes—an alliance of the intelligence services of Canada, New Zealand, Australia, the United Kingdom, and the United States—warned in 2018 that they will “pursue technological, enforcement, legislative or other measures to achieve lawful access solutions” if the companies didn’t voluntarily provide access to encrypted messages. More recently, the Five Eyes have pivoted from terrorism to the prevention of CSAM as the justification, but the demand for unencrypted access remains the same, and the Five Eyes are unlikely to be satisfied without changes to assist terrorism and criminal investigations too.
[…]
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, the adoption of the iPhoto hash matching to iMessage, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. Apple has a fully built system just waiting for external pressure to make the necessary changes.
You wouldn’t think a US company could be forced to scan all of its customers’ data, but Yahoo was. Don’t make the same mistake, Apple.
Been there, didn’t do that, got the t-shirt.
Here’s an op-ed @alexstamos and I co-authored about the risks of Apple’s content scanning plan. It’s short and easy to read, and I’m hoping it makes the issues digestible to non-technical people.
[…]
My personal proposal to Apple is to limit this tech to photo sharing rather than whole libraries, and release their hash function design. And ideally wait until researchers have time to vet it before launching to 1bn users.
There’s a crucial difference between possessing photos and sharing photos. The former is expected to be private, the latter not. This is why iCloud and Facebook are not comparable.
This issue is nuanced and Apple’s decisions involve concessions. Personally, I think Apple have done well here. They probably could have handled the communication surrounding the announcement better, but the actual functionality and policy decisions are reasonable.
[…]
You have to assume that privacy issues are a key reason why Apple has historically been so lax in this department. It’s not that Apple has sympathy for the people spreading child pornography. Why right now? That is still unclear. Perhaps, behind closed doors, someone was threatening lawsuits or similar action if Apple didn’t step up to par soon. Either way, it’s crunch time.
[…]
The weakest link in the chain on the technical side of this infrastructure is the opaqueness of the hashed content database. By design, Apple doesn’t know what the hashes represent, as Apple is not allowed to knowingly traffic illicit child abuse material. Effectively, the system works on third-party trust. Apple has to trust that the database provided by NCMEC — or whatever partner Apple works with in the future when this feature rolls out internationally — includes only hashes of known CSAM content.
All the conversations the community has been having are mirrored inside Apple; I think it’s an understandable worry that Apple is prepared to sell out all of its users despite knowing — and informing them — predators can avoid the system by turning off iCloud Photos. No wins here
Joseph Menn and Julia Love (Hacker News, MacRumors):
A backlash over Apple’s move to scan U.S. customer phones and computers for child sex abuse images has grown to include employees speaking out internally, a notable turn in a company famed for its secretive culture, as well as provoking intensified protests from leading technology policy groups.
Apple’s senior vice president of software engineering, Craig Federighi, has today defended the company’s controversial planned child safety features in a significant interview with The Wall Street Journal, revealing a number of new details about the safeguards built into Apple’s system for scanning users’ photos libraries for Child Sexual Abuse Material (CSAM).
I see the Apple PR line on photo scanning is that you don’t understand what’s going on. Your tiny brain cannot comprehend the splendor of this technology.
Apple Inc. has warned retail and online sales staff to be ready to field questions from consumers about the company’s upcoming features for limiting the spread of child pornography.
In a memo to employees this week, the company asked staff to review a frequently asked questions document about the new safeguards, which are meant to detect sexually explicit images of children. The tech giant also said it will address privacy concerns by having an independent auditor review the system.
Apple today shared a document that provides a more detailed overview of the child safety features that it first announced last week, including design principles, security and privacy requirements, and threat model considerations.
[…]
The document aims to address these concerns and reiterates some details that surfaced earlier in an interview with Apple’s software engineering chief Craig Federighi, including that Apple expects to set an initial match threshold of 30 known CSAM images before an iCloud account is flagged for manual review by the company.
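Apple’s technical summary describes this threshold as cryptographic, not just a counter: safety vouchers use threshold secret sharing, so the server cannot decrypt any voucher until enough matches accumulate. A toy Shamir k-of-n sketch — not Apple’s actual construction, which layers this with private set intersection — illustrates the property:

```python
import random

P = 2_147_483_647  # prime modulus for a toy finite field

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k reconstruct it, k-1 reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the share polynomial at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# One share is embedded in each matching image's safety voucher;
# the account-level decryption key stays hidden below the threshold.
key = 123456
shares = make_shares(key, k=30, n=100)
assert reconstruct(shares[:30]) == key   # 30 matches: server can decrypt
assert reconstruct(shares[:29]) != key   # 29 matches: key still hidden (w.h.p.)
```

Below the threshold, the available points are consistent with every possible key, which is the formal sense in which the server “learns nothing” until the 30th match.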
[…]
Apple also said that the on-device database of known CSAM images contains only entries that were independently submitted by two or more child safety organizations operating in separate sovereign jurisdictions and not under the control of the same government.
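The two-jurisdiction rule is, in effect, a set intersection: only hashes vouched for by at least two independent organizations ever ship on devices, so no single government can unilaterally insert an entry. A minimal sketch with hypothetical hash values:

```python
from collections import Counter

# Hypothetical hash databases from independent child-safety organizations:
databases = {
    "NCMEC (US)": {"a1b2", "c3d4", "ffee"},
    "Org B (EU)": {"c3d4", "e5f6"},
    "Org C (UK)": {"e5f6", "a1b2"},
}

# Keep only entries submitted by >= 2 organizations in separate
# jurisdictions; "ffee" appears in a single database, so it never ships.
counts = Counter(h for db in databases.values() for h in db)
shippable = {h for h, n in counts.items() if n >= 2}
assert shippable == {"a1b2", "c3d4", "e5f6"}
```

The guarantee is only as strong as the independence of the organizations: if two databases are derived from the same upstream source, the intersection filters out nothing.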
[…]
Apple added that it will publish a support document on its website containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature.
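A published root hash makes the shipped database auditable: anyone can recompute it over the entries their device carries and compare against the support document. A minimal flat sketch (Apple has not specified the exact construction; a Merkle tree would additionally allow per-entry proofs):

```python
import hashlib

def root_hash(entries):
    """Hash the database in canonical (sorted) order, so any two parties
    holding the same set of entries compute the same root."""
    h = hashlib.sha256()
    for e in sorted(entries):
        h.update(bytes.fromhex(e))
    return h.hexdigest()

published = root_hash(["c3d4", "a1b2"])           # what the vendor would post
assert root_hash(["a1b2", "c3d4"]) == published   # same set, same root
assert root_hash(["a1b2", "dead"]) != published   # any swap changes the root
```

Note what this does and does not prove: it shows every device carries the same list, but says nothing about what the hashes depict — the audit gap the commentary here keeps returning to.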
This bit about multiple organizations is interesting, but it raises additional questions. Apple previously said that the feature will start out as US-only. So they’re only going to report images to NCMEC and only images that are in the intersection of NCMEC’s database and some other foreign database? That would seem to drastically reduce the chances of finding legitimate matches, unless the organizations are all working together to exchange data, which of course raises more questions. And, if you’re in the US, does that mean Apple could be reporting images to NCMEC that are not even in the US database, but rather in two separate foreign ones?
Update (2021-08-18): Joseph Menn and Stephen Nellis:
Apple Inc said on Friday that it will hunt only for pictures that have been flagged by clearinghouses in multiple countries.
That shift and others intended to reassure privacy advocates were detailed to reporters in an unprecedented fourth background briefing since the initial announcement eight days prior of a plan to monitor customer devices.
I’m glad that Apple is feeling the heat and changing their policy. But this illustrates something important: in building this system, the only limiting principle is how much heat Apple can tolerate before it changes its policies.
I’m not sure this is actually a shift, as it was hinted at in the original documents Apple released.
Wait, so Apple wanted to ensure that the Child Sexual Abuse Material (CSAM) tech is understood to be totally separate from the iMessage photo scanning feature, and yet they’re calling it “Communication, Safety, And Messages”? 🥴👏
This whole thing would not be happening if Katie Cotton were still in charge of corporate communications.
I’m not even kidding here: Apple screwed up the messaging on this so completely that I wonder if a certain key person or two is on an extended vacation or personal leave and wasn’t around to oversee this.
It is also striking how difficult it is for even a media-trained executive to clearly articulate these features. In Stern’s interview, there are several moments when she has to pause the interview to explain, in layperson terms, what is happening with the assistance of some effective graphics. I appreciate Stern’s clarifications and I understand them to be accurate, but I wish those words came from Apple’s own representative without needing interpretation. I think Apple’s representatives are still using too much jargon.
Issues with the scope of things that CAN be done with some power cannot be resolved by voluntary choices made by the holder of that power. So long as they hold that power, they can revise their choices at any time, and they can be compelled to do things at any time.
Eva:
I’d like to take this moment to make it clear to poor Craig that no, I don’t misunderstand Apple’s plans to check photos in iCloud against NCMEC’s database of CSAM. It’s well-meaning but it’s also creating a mechanism that Apple will be forced to use for other things.
The company who makes my CPU, RAM, and hard drive don’t have any right or privilege to see my information, nor does the company who provides the locks on the door of my home. A smartphone is no different. This is not a radical position.
While I agree that this is a major privacy issue and that alone should be sufficient to call for a halt to this, I am surprised I’m not hearing more property & property rights arguments: this is Apple assigning work to user owned devices for jobs which do not benefit the user.
Apple truly screwed this up in a way that is almost beyond comprehension. All their effort on establishing an image of respecting privacy out the window.
I seriously hope they reconsider the entire thing, but knowing Apple there’s no chance at all.
Eva:
In their new FAQ, Apple says they will refuse govt requests to use their CSAM-scanning tech to scan for other forms of content. How exactly will they refuse? Will they fight it in court? Will they pull out of the country entirely? This is not a time to get vague.
Central to its case is for us to trust Apple not to use this same mechanism for other purposes. When we can’t even trust Apple to tell us what it has changed on our own Macs, we should be rightly suspicious. If it is to work at all, trust must work both ways: if Apple wants our trust, it has to trust us with the knowledge of what’s in a macOS update.
All I want to do here is convey what I think is a strong case against co-opting personal devices for law enforcement purposes, so that people who have done nothing wrong and don’t have anything to hide can see where we’re coming from when as a tech community we push back on these things.
[…]
Apple will certainly comply rather than withdraw from the markets, as they have done so far in China. It is likely that no more powerful tool for surveillance authoritarianism has ever been conceived by humans.
Member of the German parliament Manuel Höferlin, who serves as the chairman of the Digital Agenda committee in Germany, has penned a letter to Apple CEO Tim Cook, urging Apple to abandon its plan to scan iPhone users’ photo libraries for CSAM (child sexual abuse material) images later this year.
Sign the petition and email Apple leadership to tell them to drop these plans and recommit to never opening any sort of backdoor to monitor our communications.
Update (2021-08-21): Malcolm Owen (via Kosta Eleftheriou, MacRumors):
“It’s the reality. If you put back doors in a system, anybody can use a back door. And so you have to make sure the system itself is robust and durable; otherwise you can see what happens in the security world,” said Cook.
Update (2021-09-08): Ben Lovejoy (Hacker News):
Apple confirmed to me that it has been scanning outgoing and incoming iCloud Mail for CSAM attachments since 2019. Email is not encrypted, so scanning attachments as mail passes through Apple servers would be a trivial task.
Apple also indicated that it was doing some limited scanning of other data, but would not tell me what that was, except to suggest that it was on a tiny scale. It did tell me that the “other data” does not include iCloud backups.
Christina Warren returns to the show to discuss Apple’s controversial child safety initiatives, the tumultuous summer of Safari 15 beta UI designs, and a bit more on MagSafe battery packs.
Gordon Kelly (via Hacker News):
iPhone users have put up with a lot in recent months but the company’s new CSAM detection system has proved to be a lightning rod of controversy that stands out from all the rest. And if you were thinking of quitting your iPhone over it, a shocking new report might just push you over the edge.
John Koetsier (via Hacker News):
Apple fraud executive Eric Friedman told colleague Herve Sibert that Apple is the greatest platform for distributing child pornography. The comment sheds light on why Apple is now pursuing a controversial program and automating checks for child porn on customers’ phones and in their messages.
In preceding messages, Friedman writes about a presentation the two managers have been working on to be shown to Eddy Cue later that morning. Friedman shows a slide describing features within iOS that have revealed fraud and safety issues. The two relevant concerns are reports of child grooming in social features — like iMessages and in-app chat — and in App Store reviews, of all places. Subsequent messages indicate that this is partly what Friedman was referring to.
Edward Snowden (via Hacker News):
You might have noticed that I haven’t mentioned which problem it is that Apple is purporting to solve. Why? Because it doesn’t matter.
Having read thousands upon thousands of remarks on this growing scandal, it has become clear to me that many understand it doesn’t matter, but few if any have been willing to actually say it. Speaking candidly, if that’s still allowed, that’s the way it always goes when someone of institutional significance launches a campaign to defend an indefensible intrusion into our private spaces. They make a mad dash to the supposed high ground, from which they speak in low, solemn tones about their moral mission before fervently invoking the dread spectre of the Four Horsemen of the Infopocalypse, warning that only a dubious amulet—or suspicious software update—can save us from the most threatening members of our species.
[…]
Apple’s new system, regardless of how anyone tries to justify it, will permanently redefine what belongs to you, and what belongs to them.
[…]
I can’t think of any other company that has so proudly, and so publicly, distributed spyware to its own devices—and I can’t think of a threat more dangerous to a product’s security than the mischief of its own maker. There is no fundamental technological limit to how far the precedent Apple is establishing can be pushed, meaning the only restraint is Apple’s all-too-flexible company policy, something governments understand all too well.
54 Comments
There is no reason for the matching to be done on-device unless the next step is for Apple to scan and potentially upload your local non-iCloud images.
Apple has always scanned photos uploaded via iCloud for child pornography. All Apple is doing is moving the scanning from the server to the client. They are not scanning all photos taken by the client device, nor are they scanning photos stored on the client if iCloud is disabled. The CSAM database is for known child abuse images, not little Bobby walking around naked after a bath or a family spending time at a nudist colony.
>Would you rather give Apple all your photos or risk losing them?
Stop using a false dilemma fallacy.
Perhaps folks could use a service that doesn't hold the encryption key. Most consumers are better off using a consumer service that does hold the key because they are not smart enough to manage their security. However, if you value privacy, and not the faux privacy Apple peddles, you need to encrypt before uploading. There are providers offering zero-knowledge storage, either through a shrink-wrap client or a roll-your-own solution.
That said, I agree with the sentiment that this isn't needed if Apple is already doing server-side scanning. It could very easily be extended for invasive monitoring of all kinds of activity deemed antisocial and unpatriotic.
@Tom I don’t think it’s really a question of “not smart enough.” Which zero-knowledge service is there that syncs photos between the iOS and Mac Photos apps?
> The CSAM database is for known child abuse images, not little Bobby walking around naked after a bath or a family spending time at a nudist colony.
Are you saying the database is limited to actual hardcore pornography and not just nudity? Because otherwise at some point there is a chance that “little Bobby walking around naked after a bath” ends up looking too much like an existing image and gets flagged. (Even more so with the family at a nudist colony, because then you presumably have nude adult[s] in the picture, too.)
> It’s implied but not specifically stated that they are not scanning the contents of iCloud Backup (which is not E2E)
Apple did later confirm this to MacRumors.
Apple is full of shit. This machinery may have been built for fighting child exploitation, but it's already been leveraged for years to fight "terrorism." And now the entire industry has joined hands to extend the moral crusade against undesirable political views.
It is particularly notable that this is the exact opposite of Apple's technology approach on securing devices. While in hardware & software, they have spent over a decade locking every possible exploit so that they *can't* be compelled to search devices, with this radical shift Tim Cook's Apple has built machinery that can be trivially employed to target literally any image content. They will be legally compelled to do so. They know it. And they built it anyway.
Hopefully iOS 15 adoption craters, particularly in light of the fact that they will be shipping security updates to iOS 14 for everyone.
> particularly in light of the fact that they will be shipping security updates to iOS 14 for everyone.
I do wonder about the timing of that. Part of me wonders if it was intentional on Apple's part, knowing this was coming and what the potential backlash would be.
"Apple has always scanned photos uploaded via iCloud for child pornography. All Apple is doing is moving the scanning from the server to the client"
That's a pretty big change, though.
Considering that the EU and the UK are about to mandate similarly intrusive backdoors, one readily understands why Apple embarked on this project. Since E2EE is well on its way to being outlawed in major “democratic” blocs and countries, it would be ludicrous of Apple not to stand ready with the soon-to-be-mandated backdoors. (There is a lot of heated debate around these issues in both the EU and the UK still, but the signals are not particularly encouraging, as child safety, copyright, and anti-terror organisations have suddenly aligned their lobbying cannons.)
The question that bears asking is why did Apple suddenly decide to release this technology? Presumably some pressure was being applied that caused them to decide that now was the time to give an inch for fear that a foot might otherwise be taken. I can understand their preparing this system, but not their releasing it at this point, unless there were very compelling political reasons that made the inevitable PR hit preferable. Apple are no fools when it comes to brand management and PR.
Some have mentioned it might be a tactical move along what Apple see as a path to the best possible privacy compromise. If so, it feels like a very weak and extremely disappointing reason. My fear and hope is that screws were being applied in the background. This is obviously not a rushed release, but the timing remains a mystery.
As for the actual announcements, can we bring “Hell Froze Over” back?
Something that just occurred to me...is this groundwork being laid for E2EE iCloud Photo Library*? Because the thing is, they don't need the user's device to scan anything in iCloud Photo Library right now. They could just go do it. So why go through this whole dog-and-pony show of vouchers, and thresholds, and unlocking the data for Apple, and so on when none of that is necessary whatsoever?
Apple has never promised that photos in iCloud Photo Library are stored in such a way that Apple doesn't have access to them. And cloud storage providers have, for years, scanned for CSAM.
Does anyone know if Apple was actually doing that before, specifically for iCloud Photo Library? (I know someone mentioned iCloud Drive, but I'm not sure whether that would encompass iCloud Photo Library.) If so, it suddenly makes massively more sense that when they actually start doing it, they follow their "on-device" mantra instead of scanning in the cloud.
If they've been doing it in the cloud all along, then this shift makes no sense unless it is a stepping stone to E2EE. (And if they've been doing it in the cloud all along, all of the privacy advocates' fears of mission creep down the road don't really mean anything, because that same logic could have been applied to scanning in the cloud all along, and that hasn't happened, to our knowledge.) In which case, announcing this without also announcing E2EE leaves a lot of unanswered questions that causes PR problems in the meantime.
* Do we have an agreed-upon acronym for iCloud Photo Library? Because it's terribly exhausting typing it out every single time.
@Kevin They were doing it before on the server (see today’s updates above), and it does indeed make sense (we can hope) that this is part of adding E2EE, but if so it’s a curious self-own that they announced this part first.
I see people call it “iCPL.”
We know that the days to come will be filled with the screeching voices of the minority.
Our voices will be louder.
Wow.
@Michael So they were doing it before, and they had the same pressures as before to expand it to non-CSAM things, which they have (as far as we know) withheld, and that has not changed.
So then I really don't understand the uproar here. It is pre-scanning things which are already headed to iCloud, where Apple was scanning them anyway. As noted in your update, they already scan other things in response to court orders. It's optional, in that you don't have to use their cloud service (which was already scanning the things anyway).
It sounds like it's actually better in that Apple maybe doesn't have to do server-side scanning anymore, cutting off a potential avenue for abuse?
From today's update:
> [Nick Heer:] And we must trust it because it is something Apple will be shipping in a future iOS update, and it will not have an “off” switch.
This is not true. The off switch is not using iCPL.
Idea for Pegasus II:
1. Use "child protection" hash database to find horrible pictures on dark web
2. Use zero day hack to place such images on a target's device, preferably in a place Apple sees but the user doesn't.
Goodbye, annoying person! Enjoy your stay in prison! So much more effective than plundering female Arab journalists' personal photos in "compromising" bikinis and dumping them on the web to discredit them.
This "screeching minority" won't ever be buying another iPhone or Mac again. I prefer a worse UI to being subjected to this kind of risk.
Isn't this a classic example where the criminals will just use something else besides iCPL? (they probably already are) And the rest of us will have our privacy violated for no good reason. And I'm sure now we will see every other service implement nude scanning too.
@Kevin I think what’s changed is that now that it runs on-device it’s a small step to extend it to non-iCPL photos. And, given that no E2EE was announced, people are thinking maybe that was why they went to so much effort to redo a system that was already working.
How is there more potential for abuse if they have the photos on the server, in any case?
There’s no discrete switch. I don’t think turning off iCPL is practical for most people.
@Ben Yes, that’s what Jeff Johnson was saying above. I’m not sure. Maybe these are not sophisticated criminals, and so they won’t be aware they could be caught?
Also, if the detection is being done on-device (and might only be for matching known CSAM images?), then why does it matter if iCPL is enabled or not? It seems like the cloud isn't actually involved. And is this scanning everyone's photos all the time, or only when enabled to protect a minor associated with a family account?
@Ben I think it’s scanning everyone’s photos all the time. The minor account thing is for sending iMessages (different system not using hashes, and whether or not you are using iCPL).
@Ben G:
> And I'm sure now we will see every other service implement nude scanning too.
Every other service (*including iCPL*) already does this. Literally the only difference here is the scanning is happening on-device prior to the file being transported to iCPL (and associated vouchers and whatever, but it's all part of the on-device scanning system as opposed to the current server-side scanning system).
@Michael:
> How is there more potential for abuse if they have the photos on the server, in any case?
If everything is happening server-side, there is more opportunity for files to be "found" that weren't actually there (i.e. placed directly on the server by a bad actor), whereas if the scanning is on-device, presumably there would be a trail of some kind on the device registering that the photo originated there. I may be offbase with that, though.
> There’s no discrete switch. I don’t think turning off iCPL is practical for most people.
To the extent one is insistent on using iCPL, it was never an option not to participate in this scanning, which has been happening for years. So to say that it's somehow different now that it's on-device (setting aside whether that leads to something else in the future, slippery slope, etc.), and that there needs to be an off switch other than not using iCPL, is a red herring.
Meanwhile Jim is asking in the hallway: "Who did this? Bob, was it you?"
Bob: "Yup."
"So we infiltrated NCMEC?"
"Oh, we were there from the start, since the '80s. A third of the people are ours."
"So, Apple will run surveillance on all iPhones?"
"It's even better: they are tracking down people for us. We just add hashes to the database and they report back to us with exact matches. Not only on iPhones. On iPads, Macs, all of their stuff. Can't wait for their Apple Car."
"Amazing!"
"And the cherry on top? They can still promote themselves as a privacy company. Apparently they'll introduce some 'encrypted' iCloud BS next month. People will only use Apple services and ignore everything else that would give them real E2E encryption."
"Because they trust them."
"Yup."
"So people either use Google/Facebook and everything is visible to us or they use Apple and have us sitting on their local data?"
"Correct."
"And I heard Snowden is furious?"
"Yeah, he knows what's going on. But nobody cares."
"Nice!"
In general I believe it would be better for everyone if Apple would not try to control:
- What you can install on your device;
- How you pay for what you install or use on your device;
- What you do or how you use your device;
I hear all the "good reasons" but I increasingly see bad intention in disguise.
The next version of this brave new world should send every hash to Apple. It's only a few bytes; no one will notice! That way, when a new image is added to the CSAM database, you'll know who made it. This improves the motherland's loyal police force's ability to identify and eliminate troublemakers! Apple, designed in California, for North Korea!
And in the year of our Lord 2021, an assembly of evil men, known by the apple emblazoned on their tabards, threw out that ancient law, proclaimed by the great Cicero himself: Quid enim sanctius, quid omni religione munitius, quam domus unius cuiusque civium?
I’m still confused: if 1) the scanning happens on my iPhone, then 2) what does having iCPL enabled have to do with it? This seems like an arbitrary distinction.
@Ben G
1) Yes, the scanning happens entirely on device, and 2) only for images that are about to be uploaded to iCPL. It's not an arbitrary distinction; there are legal requirements around scanning what they host on their servers for CSAM.
Daring Fireball says
Fingerprinting is not content analysis. It’s not determining what is in a photo. It’s just a way of assigning unique identifiers — essentially long numbers — to photos, in a way that will generate the same fingerprint identifier if the same image is cropped, resized, or even changed from color to grayscale. It’s not a way of determining whether two photos (the user’s local photo, and an image in the CSAM database from NCMEC) are of the same subject — it’s a way of determining whether they are two versions of the same image. If I take a photo of, say, my car, and you take a photo of my car, the images should not produce the same fingerprint even though they’re photos of the same car in the same location. And, in the same way that real-world fingerprints can’t be backwards engineered to determine what the person they belong to looks like, these fingerprints cannot be backwards engineered to determine anything at all about the subject matter of the photographs.
It's magic! A content-independent way of identifying even modified pictures, that won't confuse one person's photo with another person's photo of the same thing taken at the same location! Very likely story.
To all those people who think this is fine: think of this as yet another equal-opportunity sword of Damocles. Even Damocles, the flatterer, grokked the danger. What Apple claims is not always true. For instance, Apple claims its OS is secure, yet NSO's Pegasus proves it isn't. We've been frogs, slowly boiled; the temperature rises too slowly for us to notice and hop out.
> It's magic! A content-independent way of identifying even modified pictures, that won't confuse one person's photo with another person's photo of the same thing taken at the same location! Very likely story.
That's not magic...that's just hashing with slightly fuzzy matching (or, more accurately, slightly fuzzy hashing with matching). He's not wrong that it doesn't know what's in a picture, just whether a picture is the same as another picture with exceptions of color saturation and size. His example of the cars would be better put as two photos of his car taken from even slightly different angles, as that is the point--if you take a picture of someone who is standing in the same place, and then someone else standing in the same spot at the same time takes a picture of the same person, they're probably going to match. If the two photographers are not able to stand inside of each other, though, there's going to be enough difference that it wouldn't match.
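To make the distinction concrete, here is a toy perceptual hash (a simple "difference hash" I've sketched for illustration); it is not Apple's NeuralHash, but it shows how a uniform brightness change leaves the hash untouched while a genuinely different image does not:

```python
def dhash(pixels):
    """Toy difference hash: one bit per pair of horizontally
    adjacent pixels (1 if brightness increases left-to-right)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return int("".join(map(str, bits)), 2)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A tiny 4x4 grayscale "image", a brightness-shifted copy,
# and a mirrored (genuinely different-looking) image.
img = [[10, 20, 30, 40],
       [40, 30, 20, 10],
       [10, 10, 50, 50],
       [60, 20, 20, 60]]
brighter = [[p + 25 for p in row] for row in img]
mirrored = [row[::-1] for row in img]

print(hamming(dhash(img), dhash(brighter)))  # 0: same picture, altered
print(hamming(dhash(img), dhash(mirrored)))  # nonzero: a different picture
```

Because the hash encodes relative brightness gradients rather than raw pixel values, global edits (brightness, grayscale conversion, mild resizing in real implementations) don't change it, while a different composition does.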
Also, for the record, we're talking about CSAM. If you are trying to distinguish between two slightly different photos taken at the same time of the same child being abused, it's not a relevant difference. What is a relevant difference is if your child happens to look something like a child in a known CSAM photo, and Apple is saying that based on their testing, the chance of that both happening and then matching is infinitesimally small. (Also helped by the fact that your child's photo should not be of an abusive situation, so even if the children have similar facial features or builds, it wouldn't be a match regardless.)
The point here is that Apple is matching against known photos, not using AI to figure out what activity is occurring in a picture for purposes of identifying it as previously-unknown CSAM, so there's a lot less likelihood of something going wrong.
> What Apple claims is not always true.
Even Gruber would agree that's correct.
> Apple claims its OS is secure
They've never said "iOS is completely secure with no faults whatsoever," because they know as well as everybody else that is a load of bunk, and they would be proven wrong every time they issue a security point update. They do believe the OS is more secure for users than others because of some decisions about encryption and data siloing, and I think deservedly so.
> Yet NSO's Pegasus proves it isn't.
NSO's Pegasus proves that with enough determination, anybody can break into anything. The uproar there was not as much that iOS is suddenly Swiss cheese (it's not), but that a company that operates out in the relative open was weaponizing security flaws.
@Kevin Maybe the way the CSAM image matching worked would be clearer if Apple hadn’t bundled it with the announcement of the iMessage feature that does use AI to determine what’s going on in the photo.
@Michael I don't disagree with that, though I think it's not helped by the fact you had researchers leaking this ahead of the announcement with statements that ended up being at least partially incorrect in their own right, because those were the initial headlines before Apple had announced anything at all.
@Kevin I’m unclear on exactly what happened there, but it sounds like the ones who “leaked” it had second-hand information because they were not chosen to receive the embargoed information, i.e. Apple may have known ahead of time that they would be critical.
Kevin Schumacher, this statement is wrong, and no, physics does not require you to occupy the same place at the same time to take the same photo:
If I take a photo of, say, my car, and you take a photo of my car, the images should not produce the same fingerprint even though they’re photos of the same car in the same location.
Any hash uses the content of the image to match it to other images. If it were to do otherwise, as Daring Fireball's piece says, it would be magic.
You also created a straw-man "this algorithm finds new content to ban", and then you knocked it down.
@Fabulist If there are two photos taken of the same thing from even slightly different angles, when hashed, they should produce different hashes, because the underlying data is different. If they don't, the hashing tool is either broken or intentionally designed to assign the same hash to different but similar inputs. As a different example, a text document containing 1000 As and a text document containing 999 As and 1 B have different hashes unless something is being overridden somewhere.
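The exact-hash behavior described above is easy to demonstrate with a standard cryptographic hash (this illustrates the commenter's point about traditional hashing, not Apple's fuzzy scheme):

```python
import hashlib

# One changed byte out of a thousand yields a completely different digest.
a = hashlib.sha256(b"A" * 1000).hexdigest()
b = hashlib.sha256(b"A" * 999 + b"B").hexdigest()

print(a[:16])   # the two digests share no resemblance
print(b[:16])
print(a == b)   # False
```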
Apple is describing a system that is broken, in terms of how hashing traditionally works, because either the hash itself or the matching process is fuzzy and allows for matches even when the ones and zeroes are not all identical.
As far as the "hash uses the content of the image," and Gruber supposedly describing a "content-independent" way of hashing that is "magic," I can't tell if you're really not understanding what he's saying or you're being purposely obtuse.
You are using "content" to mean something different than what Gruber was. You are using it to refer to the data in the image file, which is what the computer is using to create the hash, whereas Gruber is using it to refer to what a human being sees when they look at the image rendered on a screen, i.e. a car, or a boat, or a child--the subject matter.
Gruber is correct that the hashing process, as described by Apple, cannot turn around and spit out a report on the subject matter of the photo (no "content analysis" of "what is in a photo"). He is also correct that two different images cannot, by definition, have identical hashes unless something else is at play, which in this case is changes in resolution, crop, saturation, and image dimensions.
As far as my "straw man", we appear to have very different definitions of straw man, too. I directly rebutted your actual assertion before additionally noting that, in the context of your distrust of Apple and your use of the phrase "content-independent" (in which, I now realize, you used "content" to mean something different than the passage you were arguing against), this process is content-independent because it is not designed to understand the subject matter of the photo, simply whether it matches a known image.
I'd say that many NGOs could start creating a white list of images, with hashes.
When Apple or a similar company gets a set of hashes from a new entity that claims they are CSAM, those can be checked against such a white list.
The beauty is that all the images in the white list can be examined in the open.
Then any entity caught passing off a white-listed image hash as CSAM can be disqualified from the system.
> If there are two photos taken of the same thing from even slightly different angles, when hashed, they should produce different hashes
That is not how image descriptors work. Indeed, making them somewhat perspective/angle independent is part of the point of having image descriptors. Otherwise, the system would be defeated by simply distorting the picture a little. The fact Apple uses image descriptors produced by a neural network trained by adversarial training does not change that.
https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf
> As a different example, a text document containing 1000 As and a text document containing 999 As and 1 B have different hashes unless something is being overridden somewhere.
That also seems unlikely to me. If this point were correct, all one would need to defeat this system is insert a frame around the image or add a watermark.
Yes, by image content I mean the image data, not the image metadata, nor any interpretation of the image. By "new content to ban" I meant media in a more generic sense. I didn't even notice the same word occurs in both phrases.
Since metadata can easily be changed or removed, it is the image data that is used to identify the picture. Since Gruber said that taking the same picture at the same location of the same car results in different hashes, it is clear to me that he cannot be referring to what you call "content analysis", but instead to the image data.
Only magic could produce hashes which can distinguish between two pictures of the same car taken at the same location, yet which can survive other perturbations of the image.
I am assuming Gruber means the camera is in the same location for both photos. Otherwise I see no point in bringing up location. I suppose he could mean parking spot, but why would that be relevant?
Anyway, you are right, Apple has now lost my trust. The last time I had to set up a new iPhone for work, it automatically turned on iCloud, which I had not signed up for on my existing phone. It copied all the pictures on my personal phone to this online account "to save space". I never asked it to do that. Nor did it warn me that it wanted to pilfer my photos. I only noticed when, by accident, I opened the photo feed of my work device, only to see my wife's face looking at me. Since then I have seen others complain about this same feature. I consider this a dark pattern, something Steve Jobs said Apple did not do:
https://www.youtube.com/watch?v=39iKLwlUqBo
I now consider Apple, Facebook, & Google to be equally untrustworthy. Technology was supposed to give me control. Not give control of me to others.
@Fabulist Yes, I have seen other reports of iCPL being auto-enabled during setup/migration despite being turned off before. I don’t think Apple should be changing the setting after people have already opted out. That is certainly not respecting privacy.
Someone who works in this field says:
https://www.hackerfactor.com/blog/index.php?/archives/929-One-Bad-Apple.html
Summary:
* the NCMEC database contains false positives
* the PhotoDNA hash algorithm is reversible (Apple's algorithm might not be)
* the 1 in a trillion false positive rate is estimated, not tested, because Apple could not have tested a trillion pictures.
* what Apple is doing likely violates search and data collection laws.
@Fabulist Apple claims 1 in a trillion accounts, which is even stronger than claiming 1 in a trillion pictures.
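A back-of-the-envelope sketch of why a match threshold makes a per-account rate far smaller than the per-image rate; every number here is hypothetical, since Apple has not published its actual parameters:

```python
from math import exp, factorial

p = 1e-6       # hypothetical per-image false-positive rate
n = 10_000     # photos in one account's library
threshold = 3  # hypothetical number of matches needed to flag an account

# Expected number of false matches per account (Poisson approximation).
lam = n * p

# P(at least `threshold` false matches in one account); terms beyond
# k = 50 are negligible for lam this small.
per_account = sum(exp(-lam) * lam**k / factorial(k)
                  for k in range(threshold, 50))

print(f"per-image rate: {p:.0e}, per-account rate: {per_account:.1e}")
```

Even with 10,000 chances per library, requiring several independent matches before flagging drives the per-account rate well below the per-image rate, which is the shape of the claim Apple is making.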
@Fabulist
> what Apple is doing likely violates search and data collection laws.
There are two parts to what the person you linked said. First is Apple's manual review of suspected CSAM before forwarding it on, and whether this violates laws regarding distribution and possession of CSAM. I suspect Apple's legal department has cleared this on the basis that it is not a foregone conclusion that what they are viewing is CSAM (in other words, despite their public rhetoric, it's possible the hash matching got it wrong), and so to the extent it ends up being so, they evade violating the law because they're not knowingly distributing or possessing CSAM. The fact that they're doing this in partnership with NCMEC seems to indicate NCMEC approves of the way they're going about it. While that does not grant complete certainty of the lack of any violations of law, it carries a lot of weight given what the subject is.
The second is Apple's interactions with its users. This is covered by the terms of service that the user agrees to. Apple is not bound by the Fourth Amendment or any other laws directed at state conduct, nor can they, by definition, violate such laws. So long as the user is on notice of what is happening and what is possible (CSAM scanning and subsequently potentially "unlocking" suspected pictures for Apple), there is no violation of anything here, as far as I can tell. Mentioning Apple's license agreement is a red herring; that has nothing to do with what rights may or may not be conferred on Apple through the ToS for using their services (because this is limited to photos that the user affirmatively intends to be uploaded to Apple's servers). And their ToS explicitly carves out exceptions for Apple to scan photos for, among other illegal things, CSAM.
The landlord example is a red herring, too. Unlike laws dictating when and how a landlord may enter a unit they own and are renting to someone, there are no laws in the US (to my knowledge) governing whether a service provider can scan their customers' device contents, especially content intended for upload to a cloud service, and especially when the customer is on notice that it will happen.
@Michael bit confused here: can't you use any number of apps that will sync the contents of your photo library to some other place? Or is that an iOS-only entitlement? I know the sync process is handicapped, but in principle the sync could be mediated without iCloud.
Even if these algorithms worked as advertised, would they actually reduce child porn?
I'm guessing that there must be 2 kinds of people who traffic in such things:
* the evil people who make this shit
* the people who have an urge to watch this stuff.
Shouldn't the people who make this stuff (and hurt children) be the ones that law enforcement concentrates on? This does the opposite: it finds consumers of old photos but not the producers of new photos (If I understand correctly, new photos aren't detected). It also improves the risk-reward trade-off of consuming new photos, paradoxically possibly creating more demand...
Detecting things on the iPhone also creates a new danger: the people who distribute this sort of stuff could maybe use their iPhone to know when their material is now targeted by NCMEC... and thus hide better. This cannot happen if the scanning happens on a server.
@Sebby PhotoKit is for Mac, too, but as far as I’m aware it is not sufficient to build an alternate syncing method. Which apps are you referring to?
@Someone I guess the theory is that, like with drugs, criminalizing possession will reduce demand and therefore production. But you make an interesting point that in the digital realm this system could create perverse incentives by rewarding novelty. I think it’s more likely, though, that it would just move such illicit use from iCloud to somewhere else.
I thought this prophecy might be a useful addition to this discussion; I stumbled across it a bit ago.
https://www.lawfareblog.com/law-and-policy-client-side-scanning
It would be interesting to see how the antitrust cases against Apple unfold from now on. Can it be that Apple caved in to governments to undermine privacy in exchange for favourable treatment, and the only way to announce that without a huge backlash from the general public is to introduce it under the veil of "we want to help children"? I know it's a conspiracy theory, but you never know these days.
What I do not understand is why Apple thinks this will be of any use.
They clearly state that they do not scan all files (yet) and not all messaging platforms (yet). So it is enough for any bad actor to just stop using iCloud Photos and Apple Messages, and they would all do so before those systems go live. Maybe a few very stupid ones get caught.
Then a lot of processing will be done on all the devices, finding only false positives.
There is no way to force those bad child abusers to use iCloud Photos.
So this measure is, for all practical purposes, useless, yet it creates big risks from all sorts of vectors, from customer backlash to a Pegasus X planting special images on targeted phones.
Can't they see that?
@Michael Probably right; I don't use Photos except to get images out of my library once I've finished with them. All the cloud providers have photos support, WinZip will let you compress/export them, various NAS apps let you back them up, etc. But perhaps that only works for the camera roll and won't let you modify the device library to the same extent as iCloud. A shame if so; seems clearly to be another anti-competitive lock-in strategy.
This CLIENT side scanning where Apple is taking more of *my* device to do scanning for THEIR agenda infuriates me. Hey Tim Apple do whatever you want on the SERVER side at your site with content on YOUR machines.
Don’t decrease the life of MY DEVICES with YOUR agenda. Leave my iron & my fleet out of YOUR agenda and EXPLETIVE off.
Once again Apple is devoting more of users’ devices to THEIR agenda and the only option is to decouple from Apple. I don’t care how much CPU power Apple adds to compensate those cycles belong to *me* and not Apple. If I use iCloud photos then do the ML matching and render time on your OWN MACHINES and don’t use mine which contain tiny overheatable batteries.
I won’t be updating to iOS 15 if this crapola is builtin, because as is always true with Apple this is just the start of a featureset that nobody wanted and nobody asked for. Big Surveillance is still a big no for me and I guess iOS 15 will be too.
Tim Apple really didn’t think this one through. It’s one thing to damage users on the server side, denying them the use of applications, say, through the OCSPocalypse. Apple has physical custody of that OCSP server.
But Apple will be using a device in *users physical custody* and any “mistake” or bug in this system will directly damage the user instead of being confined to some server in China or Cupertino.
Gonna be fun when older phones in users’ pockets start getting warm because Tim Apple didn’t care about client-versus-server-side “details” like that. And please don’t tell me that Apple isn’t prone to huge blunders; as of late we’ve all seen what they’re “capable” of.
[…] Contrary to January 2020 Reports, Apple Is Not Currently Checking iCloud Photos Against CSAM Hashes […]
Everything Neuenschwander says is contemptuous and disingenuous toward privacy advocates. That he is Apple's Head of Privacy is hugely concerning.
@Michael Interesting, my mistake. I was thrown off by all the articles calling him Apple's Head of Privacy, but Horvath is indeed Chief Privacy Officer. Has she been present in this kerfuffle?
Thanks @JustMe. The article you linked to says that since NCMEC decides what to look for, and since it is part of the government, there are Fourth Amendment concerns.
Also, supposedly, Apple will be deploying this feature to third-party apps. That would explain why Khaos Tian says NeuralHash is available in the Vision framework.
https://technokilo.com/apple-child-safety-feature-third-party-apps/
Archived here:
@vintner No, I hadn’t seen anything from her lately, so I looked up her LinkedIn to see if maybe she’d moved on, but apparently she’s still CPO. Seems like she’s more on the legal side and Neuenschwander on the tech side, so perhaps that’s why they had him do this round of press about how it works.