Security Research on Private Cloud Compute
Apple (tweet, Hacker News, MacRumors):
In the weeks after we announced Apple Intelligence and PCC, we provided third-party auditors and select security researchers early access to the resources we created to enable this inspection, including the PCC Virtual Research Environment (VRE).
Today we’re making these resources publicly available to invite all security and privacy researchers — or anyone with interest and a technical curiosity — to learn more about PCC and perform their own independent verification of our claims. And we’re excited to announce that we’re expanding Apple Security Bounty to include PCC, with significant rewards for reports of issues with our security or privacy claims.
To help you understand how we designed PCC’s architecture to accomplish each of our core requirements, we’ve published the Private Cloud Compute Security Guide. The guide includes comprehensive technical details about the components of PCC and how they work together to deliver a groundbreaking level of privacy for AI processing in the cloud. The guide covers topics such as: how PCC attestations build on an immutable foundation of features implemented in hardware; how PCC requests are authenticated and routed to provide non-targetability; how we technically ensure that you can inspect the software running in Apple’s data centers; and how PCC’s privacy and security properties hold up in various attack scenarios.
[…]
We’re also making available the source code for certain key components of PCC that help to implement its security and privacy requirements. We provide this source under a limited-use license agreement to allow you to perform deeper analysis of PCC.
It’s interesting to note that Apple’s PCC code is not open source but is only available under a limited 90-day license for use as described here. However, posting code on GitHub requires the code to be viewable and forkable. IANAL, but this seems sketchy.
All remote attestation technology is rooted by a PKI (the DCA certificate authority in this case). There’s some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There’s currently no good way to prove this step, so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y, and it’s game over; it’s not detectable remotely.
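To make the shape of that chain concrete, here’s a minimal sketch in Swift using CryptoKit. It is not Apple’s actual DCA or attestation format; the `Endorsement` and `Attestation` types and the `verify` function are hypothetical stand-ins for “the root vouches for a device key, and the device key signs a measurement”:

```swift
import CryptoKit
import Foundation

// Minimal sketch of a chained attestation, not Apple's actual DCA format.
// The vendor root key endorses a per-device key; the device key then signs
// a measurement of the software it claims to be running.

struct Endorsement {
    let devicePublicKey: P256.Signing.PublicKey   // claimed to be generated inside the CPU
    let signature: P256.Signing.ECDSASignature    // signed by the vendor root
}

struct Attestation {
    let measurement: Data                         // hash of the booted software image
    let signature: P256.Signing.ECDSASignature    // signed by the device key
}

func verify(endorsement: Endorsement,
            attestation: Attestation,
            root: P256.Signing.PublicKey) -> Bool {
    // Step 1: does the vendor root vouch for this device key?
    guard root.isValidSignature(endorsement.signature,
                                for: endorsement.devicePublicKey.rawRepresentation) else {
        return false
    }
    // Step 2: did that device key sign the reported measurement?
    return endorsement.devicePublicKey.isValidSignature(attestation.signature,
                                                        for: attestation.measurement)
}

// The catch: nothing in this check proves the device key was actually generated
// inside a CPU. Whoever holds the root private key can endorse a key created on
// any machine, and the verifier above will accept it.
let rootKey = P256.Signing.PrivateKey()
let rogueDeviceKey = P256.Signing.PrivateKey()    // generated outside any secure hardware

let endorsement = Endorsement(
    devicePublicKey: rogueDeviceKey.publicKey,
    signature: try! rootKey.signature(for: rogueDeviceKey.publicKey.rawRepresentation))

let fakeMeasurement = Data(SHA256.hash(data: Data("tampered image".utf8)))
let attestation = Attestation(
    measurement: fakeMeasurement,
    signature: try! rogueDeviceKey.signature(for: fakeMeasurement))

print(verify(endorsement: endorsement, attestation: attestation, root: rootKey.publicKey)) // true
```

The verifier can only ever confirm that the chain is internally consistent; whether the device key really came from inside a CPU rests entirely on the honesty of whoever holds the root.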
Therefore, you must take on faith the organization providing the root of trust, i.e. the CPU. No way around it. Apple does the best it can within this constraint by trying to involve numerous employees, and there’s the third-party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It’s a good start, but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third-party auditor will be willing and able to shut down Apple Intelligence if they aren’t satisfied with the audit. Given Apple’s legal resources and famously leak-proof operation, is this a convincing proposition?
Conventional confidential computing conceptually works because the people designing and selling the CPUs are different from the people deploying them to run confidential workloads. The deployers can’t forge an attestation (assuming the absence of bugs) because they don’t have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to, because they aren’t running any confidential workloads, so there’s no data to steal. And in practice they are constrained by basic problems: not knowing what CPUs the deployers actually have, not being able to force changes to other people’s hardware, not being able to intercept the network connections, and so on.
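The deployer-side half of that argument is easy to illustrate with the same toy endorsement format as the sketch above (again hypothetical, not any vendor’s real scheme): holding only the vendor’s published public root, a deployer cannot produce an endorsement that verifiers will accept.

```swift
import CryptoKit
import Foundation

// Flip side of the earlier sketch: a deployer who only holds the vendor's
// *public* root key cannot mint endorsements. Any endorsement signed with some
// other key fails verification against the published root.

let vendorRoot = P256.Signing.PrivateKey()        // held by the CPU maker
let publishedRoot = vendorRoot.publicKey          // all the deployer ever sees

let rogueDeviceKey = P256.Signing.PrivateKey()
let deployersOwnKey = P256.Signing.PrivateKey()   // not the vendor root

// The deployer tries to endorse a rogue device key with a key it controls.
let forgedSignature = try! deployersOwnKey.signature(
    for: rogueDeviceKey.publicKey.rawRepresentation)

// Verifiers check against the published vendor root, so the forgery is rejected.
print(publishedRoot.isValidSignature(forgedSignature,
                                     for: rogueDeviceKey.publicKey.rawRepresentation)) // false
```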
[…]
In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to come from an Apple CPU but for which the key was generated outside of the manufacturing process, and that’s all it takes to dismantle the entire architecture.
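In other words, a transparency log makes key material consistent and publicly visible, but it cannot speak to where a key came from. A toy version of that limitation, written as a plain hash chain (much simpler than PCC’s actual log, and purely illustrative):

```swift
import CryptoKit
import Foundation

// Sketch of a transparency log as a simple hash chain. Clients can verify that a
// device key they were shown is the same one everybody else sees in the log;
// nothing in the log proves where that key was generated.

struct LogEntry {
    let devicePublicKey: Data    // raw representation of an endorsed key
    let previousHash: Data       // chains this entry to the one before it
    var hash: Data {
        Data(SHA256.hash(data: devicePublicKey + previousHash))
    }
}

struct TransparencyLog {
    private(set) var entries: [LogEntry] = []

    mutating func append(devicePublicKey: Data) {
        let prev = entries.last?.hash ?? Data(count: 32)
        entries.append(LogEntry(devicePublicKey: devicePublicKey, previousHash: prev))
    }

    // A client can confirm a key is in the log (consistency across observers)...
    func contains(devicePublicKey: Data) -> Bool {
        entries.contains { $0.devicePublicKey == devicePublicKey }
    }
}

var log = TransparencyLog()
log.append(devicePublicKey: P256.Signing.PrivateKey().publicKey.rawRepresentation)

// ...but the log operator can just as easily append a key minted on an ordinary
// server, and the inclusion check looks identical to the client.
let rogueKey = P256.Signing.PrivateKey()   // never saw the inside of secure hardware
log.append(devicePublicKey: rogueKey.publicKey.rawRepresentation)
print(log.contains(devicePublicKey: rogueKey.publicKey.rawRepresentation)) // true
```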
It’s good that Apple is building in these safeguards because there are many scenarios where they would help. We just need to realize that there are limits to the marketing claims.
The Apple Security Research blog now has an RSS feed, though it’s not properly advertised.
Previously:
Update (2024-11-06): Apple (via Hacker News):
This guide is designed to walk you through these requirements and provide the resources you need to verify them for yourself, including a comprehensive look at the technical design of PCC and the specific implementation details needed to validate it.