Privacy of Photos.app’s Enhanced Visual Search
Jeff Johnson (Mastodon, Hacker News, Reddit, 2, The Verge, Yahoo):
This morning while perusing the settings of a bunch of apps on my iPhone, I discovered a new setting for Photos that was enabled by default: Enhanced Visual Search.
[…]
There appear to be only two relevant documents on Apple's website, the first of which is a legal notice about Photos & Privacy:
Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides IP address. This prevents Apple from learning about the information in your photos. You can turn off Enhanced Visual Search at any time on your iOS or iPadOS device by going to Settings > Apps > Photos. On Mac, open Photos and go to Settings > General.
The second online Apple document is a blog post by Machine Learning Research titled Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem and published on October 24, 2024. (Note that iOS 18 and macOS 15 were released to the public on September 16.)
As far as I can tell, this was added in macOS 15.1 and iOS 18.1, not in the initial releases, but it’s hard to know for sure since none of Apple’s release notes mention the name of the feature.
It ought to be up to the individual user to decide their own tolerance for the risk of privacy violations. In this specific case, I have no tolerance for risk, because I simply have no interest in the Enhanced Visual Search feature, even if it happened to work flawlessly. There’s no benefit to outweigh the risk. By enabling the “feature” without asking, Apple disrespects users and their preferences. I never wanted my iPhone to phone home to Apple.
Remember this advertisement? “What happens on your iPhone, stays on your iPhone.”
Apple is being thoughtful about doing this in a (theoretically) privacy-preserving way, but I don’t think the company is living up to its ideals here. Not only is it not opt-in, but you can’t effectively opt out if it starts uploading metadata about your photos before you even use the search feature. It does this even if you’ve already opted out of uploading your photos to iCloud. And “privately matches” is kind of a euphemism. There remains no plain English text saying that it uploads information about your photos and specifically what information that is. You might assume that it’s just sharing GPS coordinates, but apparently it’s actually the content of the photos that’s used for searching.
One piece of data which isn’t shared is location. This is clear as several of my London skyline photos were incorrectly identified as a variety of other cities, including San Francisco, Montreal, and Shanghai.
What I am confused about is what this feature actually does. It sounds like it compares landmarks identified locally against a database too vast to store locally, thus enabling more accurate lookups. It also sounds like matching is done with entirely visual data, and it does not rely on photo metadata. But because Apple did not announce this feature and poorly documents it, we simply do not know. One document says trust us to analyze your photos remotely; another says here are all the technical reasons you can trust us. Nowhere does Apple plainly say what is going on.
[…]
I see this feature implemented with responsibility and privacy in nearly every way, but, because it is poorly explained and enabled by default, it is difficult to trust. Photo libraries are inherently sensitive. It is completely fair for users to be suspicious of this feature.
In a way, this is even less private than the CSAM scanning that Apple abandoned, because it applies to non-iCloud photos and uploads information about all photos, not just ones with suspicious neural hashes. On the other hand, your data supposedly—if there are no design flaws or bugs—remains encrypted and is not linked to your account or IP address.
jchw:
What I want is very simple: I want software that doesn’t send anything to the Internet without some explicit intent first. All of that work to try to make this feature plausibly private is cool engineering work, and there’s absolutely nothing wrong with implementing a feature like this, but it should absolutely be opt-in.
Trust in software will continue to erode until software stops treating end users and their data and resources (e.g. network connections) as the vendor’s own playground. Local on-device data shouldn’t be leaking out of radio interfaces unexpectedly, period. There should be a user intent tied to any feature where local data is sent out to the network.
Apple just crowed about how, if Meta’s interoperability requests were granted, apps the user installed on a device and granted permission to would be able to “scan all of their photos” and that “this is data that Apple itself has chosen not to access.” Yet here we find out that in an October OS update Apple auto-enabled a new feature that sends unspecified information about all your photos to Apple.
I’m seeing a lot of reactions like this:
I’m tired with so much privacy concerns from everyone without any reason… Yes it sends photo data anonymously to make a feature work or improve it. So what? Apple and iOS are the most private company/software out there.
But I’m tired of the double standard where Apple and its fans start from the premise of believing Apple’s marketing. So if you’re silently opted in, and a document somewhere uses buzzwords like “homomorphic encryption” and “differential privacy” without saying which data this even applies to, that’s good enough. You’re supposed to assume that your privacy is being protected because Apple is a good company who means well and doesn’t ship bugs.
You see, another company might “scan” your photos, but Apple is only “privately matching” them. The truth is that, though they are relatively better, they also have a history of sketchy behavior and misleading users about privacy. They define “tracking” so that it doesn’t count when the company running the App Store does it, then send information to data brokers even though they claim not to.
With Apple making privacy a big part of its brand, it is a little surprising that this was on by default and/or that Apple hasn’t made a custom prompt for the “not photo library, not contact list, not location, etc.” permissions access. Some small changes to the way software works and interacts with the user can go a long way toward building and keeping trust.
I love that Apple is trying to do privacy-related services, but this just appeared at the bottom of my Settings screen over the holiday break when I wasn’t paying attention. It sends data about my private photos to Apple.
I would have loved the chance to read about the architecture and think hard about how much leakage there is in this scheme, but I only learned about it in time to see that it had already been activated on my device, coincidentally on a vacation where I’ve just taken about 400 photos of recognizable locations.
This is not how you launch a privacy-preserving product if your intentions are good, this is how you slip something under the radar while everyone is distracted.
The issues mentioned in Apple’s blog post are so complex that Apple had to make reference to two of their scientific papers, Scalable Private Search with Wally and Learning with Privacy at Scale, which are even more complex and opaque than the blog post. How many among my critics have read and understood those papers? I’d guess approximately zero.
[…]
In effect, my critics are demanding silence from nearly everyone. According to their criticism, an iPhone user is not entitled to question an iPhone feature. Whatever Apple says must be trusted implicitly. These random internet commenters become self-appointed experts simply by parroting Apple’s words and nodding along as if everything were obvious, despite the fact that it’s not obvious to an actual expert, a famous cryptographer.
Previously:
- Meta’s iOS Interoperability Requests
- Apple Sued for Not Searching iCloud for CSAM
- macOS 15.1
- iOS 18.1 and iPadOS 18.1
- Network Connections From mediaanalysisd
- Scanning iCloud Photos for Child Sexual Abuse
Update (2025-01-02): See also: Hacker News.
If it were off by default, that would be a good opportunity for the relatively new TipKit to shine.
The release notes seem to associate Enhanced Visual Search with Apple Intelligence, even though the OS Settings don’t associate it with Apple Intelligence (and I don’t use AI myself).
The relevant note is that in 15.1 the Apple Intelligence section says “Photos search lets you find photos and videos simply by describing what you’re looking for.” I’ve seen reports that the setting was not in 15.0, though its release notes did include: “Natural language photo and video search Search now supports natural language queries and expanded understanding, so you can search for just what you mean, like ‘Shani dancing in a red dress.’”
There are so many questions. Does disabling it on all devices remove the uploaded data? Is it only actually active if you have AI on? Does it work differently depending on if you have AI enabled?
My understanding is that there is nothing to remove because nothing is stored (unless in a log somewhere) and that there is no relation to Apple Intelligence.
I fully get it that Photos isn’t really “calling home” with any personal info. It’s trying to match points of interest, which is actually something most people want to have in travel photos–and it’s doing it with proper masking and anonymization, apparently via pure image hashing.
But it does feel a tad too intrusive, especially considering that matching image hashes is, well, the same thing they’d need to do for CSAM detection, which is a whole other can of worms. But the cynic in me cannot help pointing out that it’s almost as if someone had the feature implemented and then decided to use it for something else “that people would like”. Which has never happened before, right?
I was going through all the privacy settings again today on my mom’s iPhone 13, and noticed that Apple / iOS had re-enabled this feature silently (Enhanced Visual Search in the Photos app), even though I had explicitly disabled it after reading about it here on HN the last time.
This isn’t the first time something like this has happened: her phone is not signed into iMessage, and to ensure Apple doesn’t have access to her SMS / RCS, I’ve also disabled “Filter messages from unknown senders”. Twice, over a period of roughly a year, I have found that this feature was silently enabled again.
These settings that turn themselves back on or that say they will opt you out of analytics but don’t actually do so really burn trust.
Update (2025-01-07): Thomas Claburn:
Put more simply: You take a photo; your Mac or iThing locally outlines what it thinks is a landmark or place of interest in the snap; it homomorphically encrypts a representation of that portion of the image in a way that can be analyzed without being decrypted; it sends the encrypted data to a remote server to do that analysis, so that the landmark can be identified from a big database of places; and it receives the suggested location again in encrypted form that it alone can decipher.
If it all works as claimed, and there are no side-channels or other leaks, Apple can’t see what’s in your photos, neither the image data nor the looked-up label.
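To make the shape of that round trip concrete, here is a minimal sketch in Swift. Every type and function name below is a placeholder made up to mirror the steps Claburn describes; none of it is a real Apple API, and it assumes the matching really does run only over the encrypted query.

```swift
// Hypothetical sketch of the client-side flow described above; these types
// are placeholders, not Apple APIs.

struct LandmarkEmbedding { var vector: [Double] }   // derived on-device from the cropped region
struct EncryptedQuery { var ciphertext: [UInt8] }   // homomorphically encrypted embedding
struct EncryptedReply { var ciphertext: [UInt8] }   // server's answer, decryptable only on device

protocol HomomorphicScheme {
    func encrypt(_ embedding: LandmarkEmbedding) -> EncryptedQuery
    func decryptLabel(_ reply: EncryptedReply) -> String
}

// Only the encrypted query leaves the device, and (per Apple) it goes out
// through an OHTTP relay so the server doesn't see the client's IP address.
func lookUpLandmark(_ embedding: LandmarkEmbedding,
                    using scheme: HomomorphicScheme,
                    sendViaRelay: (EncryptedQuery) -> EncryptedReply) -> String {
    let query = scheme.encrypt(embedding)   // 1. encrypt locally
    let reply = sendViaRelay(query)         // 2. server matches against its database without decrypting
    return scheme.decryptLabel(reply)       // 3. only the device can read the suggested label
}
```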
There are two issues with this, even before considering possible bugs in Apple’s implementation, or side-channels leaking information:
1) as with the CSAM scanning case, they are creating a precedent that will allow authoritarian governments to require other scanning
2) uploading the hash/fingerprint reveals to someone surveilling the network that someone has taken a photo.
[…]
In a previous breach of trust and consent, they also turned on without consent in Safari the Topics API (Orwellianly called “privacy-preserving analytics/attribution” when it is nothing but an infringement of privacy by tracking your interests in the browser itself). Even Google, the most voyeuristic company on the planet, actually asked for permission to do this (albeit with deliberately misleading wording in the request, because Google).
Even if the results are encrypted, you don’t control the keys. Best-case scenario: Photos is generating them without telling you and storing them somewhere(?). And the server side could store encrypted results for which they or some other party could have a backdoor, or just store them until advances render the encryption scheme defeatable. Who gets to audit this?
It is quite easy to see how governments will order Apple to abuse this feature in future, without any need to sabotage or compromise any Apple-supplied homomorphic-encryption / private-query / differential-privacy / ip-address-anonymizing security features[…]
[…]
That government instructs Apple to match “landmark” searches against (non-landmark) images (archetypes) which the government cares about.
[…]
When a match is found, Apple sets a “call the secret police” flag in the search response to the client device (iPhone, Mac, whatever).
[…]
Everyone can analyze the heck out of the Apple anonymous search scheme-- but it doesn’t matter whether it is secure or not. No government will care. Governments will be quite satisfied when Apple just quietly decorates responses to fully-anonymous searches with “call the secret police” flags-- and Apple will “truthfully” boast that it never rats out any users, because the users’ own iPhones or Macs will handle that part of it.
With Enhanced Visual Search, Apple appears to focus solely on the understanding of privacy as secrecy, ignoring the understanding of privacy as ownership, because Enhanced Visual Search was enabled by default, without asking users for permission first. The justification for enabling Enhanced Visual Search by default is presumably that Apple’s privacy protections are so good that secrecy is always maintained, and thus consent is unnecessary.
My argument is that consent is always necessary, and technology, no matter how (allegedly) good, is never a substitute for consent, because user privacy entails user ownership of their data.
[…]
The following is not a sound argument: “Apple keeps your data and metadata perfectly secret, impossible for Apple to read, and therefore Apple has a right to upload your data or metadata to Apple’s servers without your knowledge or agreement.” There’s more to privacy than just secrecy; privacy also means ownership. It means personal choice and consent.
[…]
The oversimplification is that the data from your photos—or metadata, however you want to characterize it—is encrypted, and thus there are no privacy issues. Not even Apple believes this, as is clear from their technical papers. We’re not dealing simply with data at rest but rather data in motion, which raises a whole host of other issues. […] Thus, the question is not only whether Apple’s implementation of Homomorphic Encryption (and Private Information Retrieval and Private Nearest Neighbor Search) is perfect but whether Apple’s entire apparatus of multiple moving parts, involving third parties, anonymization networks, etc., is perfect.
See also: Bruce Schneier.
Comments:
Partly for reasons like this, and partly for how buggy they are, I no longer use Apple services. It's less convenient, certainly, but I simply don't trust them. I don't trust anyone. I'm at the point now where I self-host practically everything I use. I'm not enthusiastic about this at all. That's time I'd rather spend doing something else and outsource those services to someone trustworthy. But there is no one trustworthy.
Oh how I miss using a computer in 2010. Back then I could just do stuff on my Mac, it could work the way I wanted it to, it was elegantly designed and worked great, and everything hadn't gotten completely bogged down and buggy by being service driven.
Another annoyance: This setting has to be turned off on each device individually (maybe this is different if Photos is hooked up to iCloud?).
If we put aside whether Apple is to be trusted or not (I’m in the “not” team) when it comes to privacy, is there anything in the EULA that allows them to do that?
I find Jeff's writing to be a bit sensationalist, and Apple has an alert fatigue problem here. People are simultaneously complaining that macOS keeps introducing more "Worblz would like to reticulate splines. Allow or Deny?" dialogs, but also complaining that users don't have enough opportunity to give consent. Yes, this feature should be opt-in, but making it opt-in means a) people are annoyed about yet another dialog on startup, or b) people don't realize the feature exists at all. (Add to that the entirely self-inflicted complexity this year that nobody can even tell which feature launched with which release; was it 15.0? 15.1? Is it coming with 15.3? Punted this release cycle altogether? Who knows! I'm also unclear why, in macOS 15.3 Beta (24D5034f), the Photos version is Version 10.0 (740.0.160). Does that mean Photos is unchanged since 15.0? Does it mean 15.3 introduces a major upgrade to Photos? Very strange.)
But, as Michael points out, the other issue — and that's on Apple, too — is poor documentation. All over the place. There's no consistent (and typically, no comprehensive) way to find what has changed in recent versions. There's no comprehensive explanation of what actually happens with this Visual Search thing. What data gets transmitted? To whom? How is it encrypted? How does it or doesn't it identify me? This annoys me especially with Apple Pay, where, _as I understand it_, the architecture is quite privacy-preserving in that each actor (you, the bank, the merchant, and Apple) has _relatively_ little information from each other — but I wish Apple made little sequence diagrams to make that clearer. Who transmits what to each other? How long is it stored?
But more simply, I'm unsure what the feature even _does_, and again, that's on Apple. Photos already have location data. Why does there need to be some ML model to match them to visual landmarks? It's obvious from the location whether they're of a landmark or not. Is the feature purely for photos that _lack_ location data?
>The issues mentioned in Apple’s blog post are so complex that Apple had to make reference to two of their scientific papers, Scalable Private Search with Wally and Learning with Privacy at Scale, which are even more complex and opaque than the blog post. How many among my critics have read and understood those papers? I’d guess approximately zero.
I'm not even sure what point he's making here. Is the implication that Apple's papers are a mirage? Snake oil?
Because best as I can tell, Apple is approaching it that way because it's about as good as it gets, for now. Yes, you can say "just don't do the feature at all" is sometimes the better approach (and that's what they wound up doing with CSAM), but I don't think "this is very complex" is a meaningful complaint. Technology is complex; therefore, we shouldn't do it?
>In effect, my critics are demanding silence from nearly everyone. According to their criticism, an iPhone user is not entitled to question an iPhone feature.
That's quite the strawman.
No, my issue with Jeff's assertions is that they're reductive, starting with the headline ("Photos phones home" IMHO conjures up all kinds of images like "every time you make a photo, Apple knows of its location") and continuing throughout the text ("'What happens on your iPhone, stays on your iPhone.' That was demonstrably a lie." — sure, that's extremely simplistic on Apple's part, but Jeff isn't actually making the case that personally created data or personally identifying information ends up on Apple's servers, just that this feature _might_ have a bug that _might_ lead to Apple's servers getting such data).
So, my main issue here isn't the feature per se, or worries about catastrophic bugs (seems to me they've put in multiple safeguards to make those unlikely) so much as that Apple once again documented this poorly, starting with the OOBE (whatever version of Photos introduced this should have some form of UI to let users make an informed choice) and going all the way through with explaining the technical details.
>is there anything in the EULA that allows them to do that?
I don't think the EULA factors in.
Nor do I think the GDPR factors in: if the feature is as explained by Apple, there is no personally identifying information; therefore, they don't need the user's consent. But IANAL. Perhaps someone who has experience with how differential privacy relates to GDPR can chime in.
> I'm not even sure what point he's making here. Is the implication that Apple's papers are a mirage? Snake oil?
I was merely responding to the many extremely condescending internet comments on my article: "My critics appear to argue that either I've neglected to do basic research or that I'm not qualified to raise questions about Enhanced Visual Search if I don't fully understand the technical details."
> That's quite the strawman.
Funny, because you just turned me into a strawman, as I quoted above.
Sören, the professional cryptographer I quoted was as bothered as I was that Apple enabled the feature by default. He didn't say that Apple's scientific papers prove that it's private. To the contrary, he said that this needs external validation. You know, peer review. I don't know why you aren't worried about vulnerabilities, because Apple software ships vulnerabilities all the time.
@remmah
I use iCloud Photo Library and turning it off on one device did not sync that preference to other devices so I had to disable it manually on all of them.
@Rui Carmo
The similarity to the CSAM scanning situation is my concern. We didn't want Apple to build that scanning of our photo libraries because if they build it to detect CSAM, a government can then force Apple to also search for terrorism, people seeking abortions, political views they don't like, etc.
This landmarks search works differently in that presumably you get the results and your account isn't "reported" because you have a photo of a landmark, but they have built a photo-library-scanning technology, which is what we all rebelled against a few years ago, and it puts us back on the slippery slope that we had hoped Apple had abandoned.
The existence of that scanning technology can invite governments to force Apple to spy for them because they already built it; they just have to change what they are looking for and how it is reported. Its existence makes it more difficult for Apple to not comply with such a request. Turning off this preference doesn't protect us from that, as they can implement a scan that doesn't have a preference and that they don't tell us about.
We know Apple has compromised on iCloud security for China, VPNs in Russia, etc. We have to trust our own Governments not to force Apple to spy on us, and silently adding this feature doesn't help us trust Apple.
I noticed the same. And promptly turned it off. However, the damage is done, namely something gets sent without my consent. This is also quite likely another 'off' 'button' that will without a shred of doubt turn itself back on at some juncture. I am very happy devs like Jeff Johnson are out there, pointing these things out. He does it in plain English, and while the tide is against those that care, I want him to persist. The naysayers, even here, are part of the bigger problem and always cause me to scratch my head thinking: why criticise someone for pointing this out? Long live Jeff.
Needing to opt out on each device makes it easy to mistakenly have it be re-enabled. You disable it on your iOS 18 phone now but you are running an older OS on your laptop. Will you remember to disable it when you update the laptop in six months?
Steve Jobs had a great quote about why this is wrong, at the D8: All Things Digital Conference in 2010:
“We’ve always had a very different view of privacy than some of our colleagues in the Valley. We take privacy extremely seriously. That’s one of the reasons we have the curated apps store. We have rejected a lot of apps that want to take a lot of your personal data and suck it up into the cloud. Privacy means people know what they’re signing up for. In plain English, and repeatedly, that’s what it means. Ask them. Ask them every time. Make them tell you to stop asking if they get tired of your asking them. Let them know precisely what you’re going to do with their data.”
Wow. Just to clarify, in case anyone misreads my post as a defense of Apple's misbehavior here: I meant that the Steve Jobs quote clearly explains why what Apple did with "Enhanced Visual Search" was a violation of user privacy. It's only been 14 years since that quote, and Apple representatives have repeatedly flogged this idea as being something they still claim to follow.
This default opt-in to a system that analyzes our data and uploads something without asking us is an unambiguous violation of this principle.
"People are simultaneously complaining that macOS keeps introducing more "Worblz would like to reticulate splines. Allow or Deny?" dialogs, but also complaining that users don't have enough opportunity to give consent"
They're both valid complaints. Apple should ask for consent for stuff like this, and consent should not be asked for by constantly throwing up dozens of stupid dialog boxes, often with inscrutable texts that most people will not understand, and often repeatedly because the consent given by the user expires.
One drastic improvement on the "too many dialogs" front would be Apple taking an idea from web browsers and giving apps the opportunity to state all the privileges they need up front, with the option for the user to grant those privileges right then and there, either all at once or individually.
Right now I have an app that requires accessibility, automation, screen recording and system extension privileges and I have no choice but to annoyingly ask for each one separately, each one with a different dialog, some of which allow granting the privilege in that dialog, and the others requiring the user to dig into the right spot in System Settings to flick the switch. It's a nightmare and a source of tons of support tickets, because users often get it wrong without realizing it.
It is possible to create a custom-made dialog that *kind of* improves this situation, but it's still a mess because you can't just add a button in your own dialog to grant the permission; you can only trigger Apple's dialog to appear and then try your best to shepherd your users through the error-prone process. And that's to say nothing of when TCC bugs out and the permissions simply don't work.
Even better, though, would be something other than dialog prompts. I don't know what, honestly, and I've spent some time thinking about it. But I'd love for someone to come up with a new UI paradigm for handling notifications and alerts that doesn't require these sorts of on screen interruptions and distractions.
DO NOT WANT.
I guess the same drift will happen with neural implants too. You want to take public transport? You'll be needing the latest neural implant OS for that. Oh? It trawls and matches your thoughts and dreams in a purely anonymized fashion, for your safety, and you don't want that? I guess you'll have to walk to work then.
So this sounds like Apple here is saying that your info is private and so we’re going to enable this feature because we’re confident about its privacy.
Other folks are saying, “Hey, that sounds great and all, but there could be a privacy leak. Prove it… and your refusal proves you’re not trustworthy.” (Actually, it sounds more like clickbaity headlines basically saying ‘Apple probably turned this on to steal your data’… The Verge did that recently.)
So do we (again) ask for source code for the operating systems so we can audit it? (Let’s be serious — who’s really competent to do so? Far better for them to use that audit to find bugs and use them as zero-day exploits.) Are we expecting them to give source code to us? Security through obscurity has always been Apple’s modus operandi.
Should we trust them? What’s their track record? Has iMessage’s encryption been cracked? Their other encryptions? Differential privacy been debunked?
Apple’s brand is that their north star is user privacy, and while they fall short of expectations sometimes (the recent settlement offer re: accusations of Siri’s inadvertent ‘wiretapping’ (and the sensationalist coverage implying that Apple settled over accusations that it sold that info to ad firms… like, what??), law-enforcement warrant-required access to your iMessages), I think that their record has been pretty good, all said, and getting better as the public is more aware of privacy issues (see the CSAM scanning) and Apple has had its hand slapped (batterygate, the new privacy/analytics dialogues during setup, etc.).
So I’m not saying that I can prove they’re actually private (though track record points towards it), but how about some actual proof that they’re not doing this in a private way? I mean… Apple literally states it’s being done in a private way, so they’d be open to class action lawsuits if it actually isn’t.
Also, just like with their aborted CSAM scanning, if those locations were stored or scanned in some Apple database, the same issues would come up — “Hey, the government has proof you were at this location.” Apple’s not going to do that. (More likely, they have a lookup table of rooflines… the same one that powers their and Google Maps’ “wave your phone around at the city skyline and we’ll figure out where you are” feature.)
I’m sorry but this sounds like connecting dots that could make sense… if it was Google or Facebook, but much less so if it’s Apple.
Actual proof, please.
“Their privacy track record is so strong that we shouldn’t question them” isn’t really the strong argument you think it is when we only have to look back as far as yesterday’s news to see that trust violated.
Presumably this feature still works if you have Advanced Data Protection enabled, but it seems to violate the promise of that feature.
“Advanced Data Protection uses end-to-end encryption to ensure that a majority of your iCloud data can only be decrypted from your trusted devices.”
- https://www.apple.com/legal/privacy/data/en/advanced-data-protection/
They go on to describe mail, contacts, calendars, and sharing such as a shared note to someone without ADP as exceptions. Photos being analyzed for landmarks happens on device and then data is sent to Apple about the photos, but that is just the sort of thing someone who signed up for ADP is expecting to not happen. They aren’t accessing the photos on iCloud servers, but the promise that no one including Apple can access your photos feels broken by the landmarks search.
@Bri Yes, there needs to be some way to do a combined prompt.
@Someone else No one’s asking for source code. We just want them to tell us what they’re doing and let us choose to opt in (or not). Otherwise they’re going against both of the Steve Jobs principles, as previously mentioned. We’ve discussed iMessage here several times; that’s a case where (a) Apple’s marketing was misleading, and (b) the metadata about who and when you were messaging was not secure. So that track record is not great.
@Eric Enhanced Visual Search has nothing to do with iCloud, AFAIK, so I wouldn’t expect ADP to affect it.
I understand they aren’t accessing photos in iCloud, but on device. It doesn’t violate the promise that they can’t access the data in iCloud, but rather violates the spirit of that promise by accessing the data anyway directly on device and sending data to their landmarks servers.
@Eric I haven’t read the technical papers, but I guess they are claiming that (due to the homomorphic encryption) Apple’s servers are able to process the data even though they can’t decrypt it. So I think this is within the spirit of ADP.
To simplify: neural networks use dot products as a basic operation. Dot products measure whether two vectors are aligned. The homomorphic encryption Jeff mentions preserves the ability to compute dot products (within some error) when the image is encrypted. Therefore you should be able to use algorithms based on dot products to recognize things like "there's a cat in the picture" even if the picture itself is encrypted.
Obviously there is a point where you can describe the picture so well that the fact the image is encrypted is irrelevant. Secondly, this whole argument presumes that there are no failures in Apple's implementation. Seeing the source code doesn't really help in the sense that you're still trusting Apple isn't using something else on their devices and servers. It does help in the sense that you might find errors in their implementation that you can tell them about and they will then (perhaps) fix.
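To make the dot-product idea concrete, here is a toy sketch with made-up four-dimensional embeddings (real embeddings are much larger, and these numbers have nothing to do with Apple's actual model). The point is only that the matching reduces to arithmetic that a suitable homomorphic scheme could evaluate over ciphertexts.

```swift
// Toy dot-product matching with made-up four-dimensional embeddings.
// An HE scheme that supports addition and multiplication can evaluate the
// same arithmetic on encrypted vectors, returning an encrypted score.

func dotProduct(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
}

let photoEmbedding = [0.12, 0.80, 0.55, 0.05]   // from the device's local model (made up)
let eiffelTowerRef = [0.10, 0.82, 0.50, 0.07]   // database entry (made up)
let goldenGateRef  = [0.90, 0.10, 0.05, 0.40]   // database entry (made up)

// Higher score = better-aligned vectors = closer visual match.
print(dotProduct(photoEmbedding, eiffelTowerRef))   // ≈ 0.95
print(dotProduct(photoEmbedding, goldenGateRef))    // ≈ 0.24
```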
Given that I'm seeing Apple is going to pay $95 million to settle a lawsuit for Siri "eavesdropping" on conversations, rather than defend their reputation... I find all the comments above defending Apple's intentions pretty cultish.
I understand Apple's papers on homomorphic encryption and differential privacy; the implementation is very clever, but that does not excuse Apple turning the feature on without consent.
Basically, Apple is analyzing your photos on-device to see if there is something that looks like a landmark, calculating a hash/fingerprint vector that encodes the shape of the landmark, encrypting it (for homomorphic encryption), adding noise (for differential privacy), and sending it to Apple's servers, where their FHE scheme allows looking up the fingerprint in a database of landmarks without revealing to Apple or anyone else what the landmark is.
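As a toy illustration of the "adding noise" step: a common differential-privacy mechanism is Laplace noise, sketched below with an arbitrary scale. Apple hasn't published which mechanism or parameters it actually uses, so this is only a sketch of the general technique, not of Apple's implementation.

```swift
import Foundation

// Toy Laplace mechanism: perturb each coordinate of the fingerprint before it
// is encrypted and sent. The scale here is arbitrary; Apple has not published
// its parameters, so this only illustrates the general technique.

func laplaceNoise(scale: Double) -> Double {
    // Inverse-CDF sampling; shrink u slightly so |u| stays strictly below 0.5.
    let u = Double.random(in: -0.5..<0.5) * 0.999_999
    return -scale * (u < 0 ? -1.0 : 1.0) * log(1 - 2 * abs(u))
}

func privatize(_ embedding: [Double], scale: Double) -> [Double] {
    embedding.map { $0 + laplaceNoise(scale: scale) }
}

let embedding = [0.12, 0.80, 0.55, 0.05]        // made-up landmark fingerprint
let noisy = privatize(embedding, scale: 0.05)   // this (after encryption) is what would leave the device
print(noisy)
```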
There are two issues with this, even before considering possible bugs in Apple's implementation, or side-channels leaking information:
1) as with the CSAM scanning case, they are creating a precedent that will allow authoritarian governments to require other scanning
2) uploading the hash/fingerprint reveals to someone surveilling the network that someone has taken a photo. A whistleblower taking photos of wrongdoing in a specific location would be exposed, for instance.
As to people complaining about pop-up fatigue, this is used for a frankly trivial and largely useless feature, identifying landmarks in your photos. The permission can be asked for when you actually try to use the feature (someone mentioned TipKit, which is designed for this).
The people arguing Apple deserves the benefit of doubt are completely delusional or suffering from Stockholm Syndrome. Apple is an ad company, increasingly so as the category euphemistically called "Services" (mostly the App Store tax and Advertising) is critical to their revenue growth. No advertising company can be given the benefit of doubt on tracking, as MJ clearly demonstrates in his links to Apple doublespeak on the subject.
In a previous breach of trust and consent, they also turned on without consent in Safari the Topics API (Orwellianly called "privacy-preserving analytics/attribution" when it is nothing but an infringement of privacy by tracking your interests in the browser itself). Even Google, the most voyeuristic company on the planet, actually asked for permission to do this (albeit with deliberately misleading wording in the request, because Google).
It is quite easy to see how governments will order Apple to abuse this feature in future, *without* any need to sabotage or compromise any Apple-supplied homomorphic-encryption / private-query / differential-privacy / ip-address-anonymizing security* features:
1. Some government (any one big enough to influence Apple) wants to find suspicious people (this does not have to relate to CSAM-- could be folks with photos of political targets like Alice Weidel or "bad things" like firearms).
2. That government instructs Apple to match "landmark" searches against (non-landmark) images (archetypes) which the government cares about. (Apparently this will require salting the data provided to the image analyzer on each client device as well as the cloud, but so what? By design that data is updated all the time anyway, and it can be regionally-varied by Apple.)
3. When a match is found, Apple sets a "call the secret police" flag in the search response to the client device (iPhone, Mac, whatever). This is trivial: for the new service, Apple tells us it may send back "Eiffel Tower" or "Not Sure," so Apple can just as well send back "tell the user 'not sure' but also quietly call the BfV and report image of Alice Weidel." (A call-the-cheka flag can easily be obfuscated in the reply message; just for example it could be the computed parity of some not-really-random cryptographic nonce. Also a flag can be per-government, since, for example, the German and British governments might be interested in different "bad things.")
4. Apple's software on the client device sees that "call the secret police" flag set, then sends a non-anonymous, not-privacy-differentialized, not IP-obscured message (with full GPS, wireless-environment, and device-identification data, plus almost certainly a goodly sample of local camera, microphone, and sensor feed) to the secret police.
5. Snoopy government gets exactly what it wanted-- a message clearly identifying each device exposed to anything that government considers suspicious, which also identifies the user(s) in virtually all cases. Plus location, network, and device data to help the government spy upon (or arrest) the user.
Everyone can analyze the heck out of the Apple anonymous search scheme-- but it doesn't matter whether it is secure or not. No government will care. Governments will be quite satisfied when Apple just quietly decorates responses to fully-anonymous searches with "call the secret police" flags-- and Apple will "truthfully" boast that it never rats out any users, because the users' own iPhones or Macs will handle that part of it.
* Maybe Apple's privacy/security scheme will both be implemented correctly and securely *and* will hold up to cryptanalysis. That is possible (though it would only take a small bug or design flaw to compromise the system). As we can see, though, the theoretical security of the query scheme is irrelevant, because Apple controls the rest of the software on the device as well as the cloud servers, so Apple can send arbitrary messages to devices and have them react in arbitrary ways whenever an image matches a target known to Apple, whether or not it is a picture of a "landmark."
The fact that Apple is aggressively forcing this obviously-low-value "find landmarks in images" feature onto users pretty much proves that Apple intends to abuse the feature. Few or no customers need help to recognize landmarks in the photos they take while making tourist visits to those selfsame landmarks. This feature wastes the user's battery and mobile data allowance to provide almost zero value to the user. An on-demand tap-image-to-identify-a-landmark-or-something (flowering plant? mushroom species?) service might be valuable, but no one is asking for a "tell me what I already know by scanning all my images" service. It is obvious that Apple is really deploying a "scan all user images on behalf of hostile spy agencies" feature.
I wish these new features defaulted to, "Off."
This week I discovered my phone was filtering my incoming mail by what it thought was important.
I also found Google Maps in my car deciding for me to avoid highways. That was turned on in two different places.
WTF? Give me an Apple Stupid button as an alternative to Apple Intelligence.
And don't let's get started about Siri's mal-dictation.
How it works:
https://9to5mac.com/2025/01/14/enhanced-visual-search-apple-privacy/
Multiple levels of digital chaff and mingling/mixing. I’m no crypto-gopher, but it seems pretty private, non-attributable, and one-way to me.
This is much worse: open an image containing a "landmark" in the Preview app, then open the Inspector. An icon will appear which, when clicked, will pop up information about the landmark. Verified with a JPEG with no GPS metadata. Take a screenshot of the opened image, and the same works in the screenshot. This is system-level, and turning it off in the Photos app has no effect.
Imagine when this is turned on in Safari...
@Bogdan It’s not intuitive at all, but there is an opt-out for that Visual Look Up feature: go to System Settings ‣ Spotlight and uncheck Siri Suggestions.
@Michael Thank you, thank you! I was just freaking out when looking at an image with a panda and this popped up again. The capability thus exists to remotely fingerprint any image viewed on the Mac and this is turned on by default.