Private Cloud Compute
Apple (via Ivan Krstić, ArsTechnica):
Apple Intelligence is the personal intelligence system that brings powerful generative models to iPhone, iPad, and Mac. For advanced features that need to reason over complex data with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. Built with custom Apple silicon and a hardened operating system designed for privacy, we believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale.
[…]
The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot. We paired this hardware with a new operating system: a hardened subset of the foundations of iOS and macOS tailored to support Large Language Model (LLM) inference workloads while presenting an extremely narrow attack surface. This allows us to take advantage of iOS security technologies such as Code Signing and sandboxing.
On top of this foundation, we built a custom set of cloud extensions with privacy in mind. We excluded components that are traditionally critical to data center administration, such as remote shells and system introspection and observability tools. We replaced those general-purpose software components with components that are purpose-built to deterministically provide only a small, restricted set of operational metrics to SRE staff. And finally, we used Swift on Server to build a new Machine Learning stack specifically for hosting our cloud-based foundation model.
[…]
Since Private Cloud Compute needs to be able to access the data in the user’s request to allow a large foundation model to fulfill it, complete end-to-end encryption is not an option. Instead, the PCC compute node must have technical enforcement for the privacy of user data during processing, and must be incapable of retaining user data after its duty cycle is complete.
[…]
Every production Private Cloud Compute software image will be published for independent binary inspection — including the OS, applications, and all relevant executables, which researchers can verify against the measurements in the transparency log.
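It will be interesting to see what that inspection looks like in practice. At minimum, “verify against the measurements in the transparency log” presumably means something like hashing the published image and comparing it to the logged digest. A minimal sketch of that check, where the log-entry format and the use of a plain SHA-256 digest are my assumptions, not Apple’s published schema:

```swift
import Foundation
import CryptoKit

// Hypothetical transparency-log entry; field names and the use of a plain
// SHA-256 digest are assumptions for illustration, not Apple's schema.
struct TransparencyLogEntry {
    let releaseVersion: String
    let imageDigestHex: String
}

// Hash a downloaded PCC software image and compare it to the logged measurement.
func imageMatchesLog(imageURL: URL, entry: TransparencyLogEntry) throws -> Bool {
    let imageData = try Data(contentsOf: imageURL)
    let digest = SHA256.hash(data: imageData)
    let digestHex = digest.map { String(format: "%02x", $0) }.joined()
    return digestHex == entry.imageDigestHex.lowercased()
}
```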
Then they’re throwing all kinds of processes at the server hardware to make sure the hardware isn’t tampered with. I can’t tell if this prevents hardware attacks, but it seems like a start.
They also use a bunch of protections to ensure that software is legitimate. One is that the software is “stateless” and allegedly doesn’t keep information between user requests. To help ensure this, each server/node reboot re-keys and wipes all storage.
[…]
Of course, knowing that the server is running a specific piece of software doesn’t help you if you don’t trust the software. So Apple plans to put each binary image into a “transparency log” and publish the software.
But here’s a sticky point: not with the full source code.
Security researchers will get some code and a VM they can use to run the software. They’ll then have to reverse-engineer the binaries to see if they’re doing unexpected things. It’s a little suboptimal.
And I don’t understand how you can tell whether the binary image in the log is actually what’s running on the compute node.
As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Cloud Compute. You won’t opt into this, you won’t necessarily even be told it’s happening. It will just happen. Magically.
[…]
Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.
[…]
I would imagine they’ll install these servers in a cage at a Chinese cloud provider and they’ll monitor them remotely via a camera. I don’t know how you should feel about that.
Aside from the source code issue, it’s not clear to me what more Apple could reasonably do. Let researchers inspect the premises? They’re making a strong effort, but that doesn’t mean this system is actually as private as on-device. You have to trust their design and all the people implementing it and hope there aren’t any bad bugs.
It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this.
I’ve asked a lot of people: “OK, imagine Facebook implemented the same system, you’d be fine using it?” Their answer was “Well, no…” Because at the end of the day this system still fundamentally relies on trust. None of this stuff is actually verifiable. And that becomes crystal clear when you realize that you wouldn’t trust it if you simply switched out the names. No one is saying they’re not trying, but that’s different than having created an actually secure system.
Shell game: We put the data under the “local processing cup,” mention you need servers, start swapping cups around, invent a nonsense term “Private Cloud Compute” & voila! These are SPECIAL servers. That’s how you go from “local matters” to “we’re doing it on servers!”
Something that gets lost in discussions about trust is the kind of trust you actually need. Plenty of people trust Apple’s intentions. But with the cloud you further need to trust that they, e.g., never write any bugs. That they have perfect hiring that catches someone trying to infiltrate them, despite it being super tempting for a gov to try. That they’ll shut the whole feature down if a gov passes a data retention law. This seems pedantic, but these were Apple’s own arguments in the past.
The so-called “verifiable transparency” of Private Cloud Compute nodes is a bad joke. They’re mostly closed source, so security researchers would have to reverse engineer almost everything. That’s the opposite of transparency.
Only Apple could claim that closed source is transparent. Orwellian doublespeak.
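One concrete note on the statelessness claim quoted earlier: presumably, dropping a boot-specific key is what makes “re-keys and wipes all storage” cheap and enforceable, since anything written under the old key becomes unreadable. A minimal sketch of the idea, with names and APIs of my own choosing rather than Apple’s implementation:

```swift
import Foundation
import CryptoKit

enum VolumeError: Error { case sealingFailed }

// Hypothetical sketch of a storage key that exists only in memory for a
// single duty cycle. This is not Apple's implementation; it just shows why
// re-keying on reboot acts as a cryptographic wipe.
final class EphemeralVolume {
    private var key = SymmetricKey(size: .bits256)

    // Encrypt data under the current boot's key.
    func seal(_ plaintext: Data) throws -> Data {
        guard let combined = try AES.GCM.seal(plaintext, using: key).combined else {
            throw VolumeError.sealingFailed
        }
        return combined
    }

    // Decrypt data written during the current boot; fails for anything
    // written before the last re-key.
    func open(_ sealed: Data) throws -> Data {
        try AES.GCM.open(AES.GCM.SealedBox(combined: sealed), using: key)
    }

    // Simulate a reboot: generate a fresh key and forget the old one, so
    // previously written ciphertext can never be decrypted again.
    func rebootAndRekey() {
        key = SymmetricKey(size: .bits256)
    }
}
```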
Update (2024-06-18): Sean Peisert:
My question is why Apple is doing Private Cloud Computing rather than Confidential Computing (e.g., AMD SEV, Intel TDX) to have entirely hardware-enforced isolation, and I guess the obvious answer is that they haven’t built that level of technology into Apple Silicon yet.
You still need to trust that Apple is running the software they say they are.
You also need to trust that they can ignore the NSA if they get an NSA letter demanding that they secretly change the software to enable NSA snooping.
They can’t tell you if the NSA demands that.
Did I miss something on Apple’s PCC setup? If the attestation chain of trust is ultimately traced back to a private key Apple manages, wouldn’t they be able to fake attestation and trick the end device into talking to nodes that are running non-public PCC software?
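That is the crux of the attestation question: verification on the client ultimately reduces to checking a signature against a root key Apple controls, plus checking that the attested measurements appear in the public transparency log. A minimal sketch of that shape, with structures and key types of my own choosing rather than Apple’s actual protocol:

```swift
import Foundation
import CryptoKit

// Hypothetical attestation structure; fields and key types are illustrative,
// not Apple's actual attestation format.
struct Attestation {
    let measurements: Data   // e.g. digests of the booted PCC software stack
    let signature: Data      // signature over the measurements
}

// The client accepts a node only if (1) the attestation chases back to the
// vendor-managed root key and (2) the measurements match something that has
// been published in the transparency log.
func clientAccepts(_ attestation: Attestation,
                   rootKey: Curve25519.Signing.PublicKey,
                   publishedMeasurements: Set<Data>) -> Bool {
    guard rootKey.isValidSignature(attestation.signature,
                                   for: attestation.measurements) else {
        return false
    }
    return publishedMeasurements.contains(attestation.measurements)
}
```

The second check is what is supposed to make a faked or coerced attestation observable after the fact, but the first check is still rooted in a key that only Apple holds.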
Update (2024-06-24): Saagar Jha:
Apple seems to just categorically fail at threat models that involve themselves. I guess for iPhone you just suck it up and use it anyway but for this the whole point is that it’s supposed to be as secure as on-device computation so this is kind of important.
Even shelving insider threat, this is a lot of words for “we did TPM”.
[…]
To be 100% clear: you know how NSO or Cellebrite keep hacking iPhones? This thing is made so that if you do that to PCC, you get to see what is going on inside of it. And because of how TPMs work it will likely send back measurements to your phone that attest cleanly.
The “solution”, as far as I can tell, is that Apple thinks they would catch attempts to hack their servers. Oh yeah also hacking the server is hard because they used Swift and deleted the SSH binary. Not like they ship an OS like that already to a billion people.
Also other people have been grumbling about this but I’ll come out and say it: gtfo with your “auditability”. You don’t care about auditability. You care about your intellectual property. This blog post is hilariously nonsensical.
See also: James Darpinian.
Update (2024-07-02): Rich Mogull:
Here is where Apple outdid itself with its security model. The company needed a mechanism to send the prompt to the cloud securely while maintaining user privacy. The system must then process those prompts—which include sensitive personal data—without Apple or anyone else gaining access to that data. Finally, the system must assure the world that the prior two steps are verifiably true. Instead of simply asking us to trust it, Apple built multiple mechanisms so your device knows whether it can trust the cloud, and the world knows whether it can trust Apple.
[…]
So, Apple can’t track a request back to a device, which prevents an attacker from doing the same unless they can compromise both Apple and the relay service. Should an attacker actually compromise a node and want to send a specific target to it, Apple further defends against steering by performing statistical analysis of load balancers to detect any irregularities in where requests are sent.
[…]
Apple will publish the binary images of the software stack running on PCC nodes. That’s confidence and a great way to ensure the system is truly secure—not just “secure” because it’s obscure.
I don’t know—a binary image is certainly toward the obscurity end of the spectrum. And it is still not clear to me how it can be proven that the image you inspected is the same as the one actually running on the node.
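As for the steering defense Mogull describes above, Apple has not published its methodology; the general shape would be anomaly detection over how requests are distributed across nodes. A toy sketch, with a uniformity assumption and threshold that are entirely mine:

```swift
// Toy sketch of detecting "steering": if an attacker funnels a target's
// requests toward a compromised node, that node's share of traffic should
// drift away from what the load balancer is expected to produce. The
// uniformity assumption and threshold below are illustrative only.
func nodesWithSuspiciousLoad(requestCounts: [String: Int],
                             tolerance: Double = 4.0) -> [String] {
    let total = requestCounts.values.reduce(0, +)
    guard total > 0, !requestCounts.isEmpty else { return [] }
    let expected = Double(total) / Double(requestCounts.count)
    let stdDev = expected.squareRoot()   // Poisson-style approximation
    return requestCounts.compactMap { node, count in
        abs(Double(count) - expected) > tolerance * stdDev ? node : nil
    }
}
```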
Update (2024-09-13): Lily Hay Newman (via John Voorhees):
“We set out from the beginning with a goal of how can we extend the kinds of privacy guarantees that we’ve established with processing on-device with iPhone to the cloud—that was the mission statement,” Craig Federighi, senior vice president of software engineering at Apple, tells WIRED. “It took breakthroughs on every level to pull this together, but what we’ve done is achieve our goal. I think this sets a new standard for processing in the cloud in the industry.”
I would hope so — an iPhone 15 with an A16 chip is not compatible with Apple Intelligence. An iPhone 15 Pro and its A17 Pro chip would be a better comparison. I do not know whether this error is Apple’s or the reporter’s, but it has survived a full day since the article’s publication.
[…]
Wired appended a cheeky note to the article saying it “was updated with clarification on the Apple Intelligence-generated image Federighi created for his dog’s birthday and additional confirmation that she is a very good dog”.
They “corrected” that and added the name of his dog but didn’t fix the substantive error.