Patrick Walsh

Analysis of Apple's New AI Private Compute Cloud

The Good and the Bad of Their Approach to Private Server-side LLMs

Earlier this week, Apple announced their upcoming AI capabilities and a partnership with OpenAI. Apple plans to run AI on-device for many tasks, with an opt-in hand-off to ChatGPT for interactions that are beyond the capabilities of the on-device models.

However, Apple has long made privacy for its users a key differentiating feature. They’ve invested heavily in technology like end-to-end encryption and in making their devices capable of doing things like face recognition without having to send photos unprotected to a server. But the most capable AI models, like OpenAI’s GPT-4o, are too big to run on a phone no matter how impressive the “neural engine.” And you can’t end-to-end encrypt prompts if the model is running on a server.

Apple’s solution to this problem is to design a system that’s effectively a confidential compute environment (though it isn’t, really – we’ll get to that). They published details around their core requirements and how they went about meeting those requirements in a recent blog, Private Cloud Compute: A new frontier for AI privacy in the Cloud. The rest of this blog will be a reaction to that one and an overview of their Private Cloud Compute (PCC) architecture.

Screenshot of Apple's private compute architecture blog

Before we dive into that design, let’s stop to consider Apple’s position in the world. They’re the 2nd or 3rd largest public company in the world by market cap. Their devices hold the data of around 1.5 billion active users*, and that data is of great interest to the intelligence agencies of nations around the world. Consequently, the target on their back is enormous. The resources behind would-be Apple attackers are next-level. And Apple is trying to protect the privacy of their users from this level of determined and well-funded adversary. Thus a good portion of their blog talks about physical security – of chips in their supply chain, of physical access to the servers, and even of compromised Apple employees.

How They’ll Secure Prompts and Responses

Custom Hardware and Software

Apple Hardware

Perhaps predictably, Apple is using their own hardware and software for the entire stack, starting with Apple Silicon (presumably M4 chips, but they don’t specify), Secure Enclave, and Secure Boot.

A New, Minimal OS

On the software side, they’ve taken the guts of iOS and macOS and built a new operating system just for this purpose. It includes a minimal set of components pulled from their existing OSes, focused on core needs and security, including their code-signing verification and sandboxing functionality.

The idea is to remove any features that aren’t needed, to reduce the surface area of potential attacks. This new OS is built to do one thing only: run LLMs. And the OS will only run code cryptographically signed by Apple. They’ve taken steps to ensure memory safety and to avoid the execution of anything other than the server processes needed to generate model inferences. For example, they’ve intentionally excluded debugging tools, remote and local shell capabilities, remote login, and more.
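As a toy illustration of that signed-code-only policy (this is not Apple’s actual code-signing chain; it’s a sketch using an Ed25519 key from the Python cryptography package as a stand-in), the enforcement boils down to: verify the image, or refuse to launch it.

```python
# Minimal sketch of "only run code signed by the platform owner"
# (illustrative; not Apple's Secure Boot / code-signing implementation).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# In reality the trusted public key is baked into the boot chain.
signing_key = Ed25519PrivateKey.generate()        # stand-in for Apple's signing key
trusted_pubkey = signing_key.public_key()

image = b"...inference server binary bytes..."    # hypothetical server image
signature = signing_key.sign(image)               # produced at build/release time

def launch_if_signed(image: bytes, sig: bytes, pubkey: Ed25519PublicKey) -> None:
    """Refuse to execute anything whose signature doesn't verify."""
    try:
        pubkey.verify(sig, image)
    except InvalidSignature:
        raise RuntimeError("unsigned or tampered image; refusing to run")
    print("signature OK; launching inference process")

launch_if_signed(image, signature, trusted_pubkey)
```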

Keeping Data Ephemeral

Further, they promise to delete all inputs and outputs immediately after processing and to not log any prompts or responses. In a pretty neat thought exercise, they’ve considered what could happen if an “implementation error” causes prompts to get written to disk anyway. To combat that, they’ve built in automatic crypto-shredding on reboot. This reminds me of the Nix “erase your darlings” concept, but the erasure mechanism is better.

It works like this: the filesystem gets encrypted with a key, but the key is not persisted anywhere. It’s random and kept only in memory, so on the next reboot that key is gone and the data on that volume can’t be accessed. On startup, they create a new volume with a new throw-away key, overwriting the previous one. This is a great way to enforce the deletion of that data.
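Here’s a minimal sketch of the idea, assuming a random AES key that lives only in process memory (this isn’t Apple’s implementation; it just illustrates why losing the key effectively shreds anything that was written):

```python
# Crypto-shredding with a throw-away volume key (illustrative sketch).
# The key is generated fresh on every boot and never written to disk, so a
# reboot makes anything written to the volume permanently unreadable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EphemeralVolume:
    def __init__(self) -> None:
        self._key = AESGCM.generate_key(bit_length=256)     # exists only in memory
        self._aead = AESGCM(self._key)
        self._blocks: dict[str, tuple[bytes, bytes]] = {}   # path -> (nonce, ciphertext)

    def write(self, path: str, plaintext: bytes) -> None:
        nonce = os.urandom(12)
        self._blocks[path] = (nonce, self._aead.encrypt(nonce, plaintext, None))

    def read(self, path: str) -> bytes:
        nonce, ciphertext = self._blocks[path]
        return self._aead.decrypt(nonce, ciphertext, None)

vol = EphemeralVolume()
vol.write("/scratch/prompt.txt", b"accidentally persisted prompt")
print(vol.read("/scratch/prompt.txt"))
# After a reboot, a new EphemeralVolume (with a new key) replaces this one;
# the old key is gone, so the old volume's contents are unrecoverable.
```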

So this new OS only runs on Apple’s custom hardware stack, can only run some very specific software, provides no ability for administrators to get in, produces only limited metrics for visibility into what it’s doing, and ensures that even persisted information is, to some extent (we don’t know how often they’ll reboot), ephemeral.

This is a good start.

Mechanisms for Anonymization

Authentication

Apple isn’t providing a free global service to everyone, though they’ve said it will be free to Apple users. This means Apple needs some way of authenticating requests to determine what’s valid. However, they don’t want to know who is making the request, which means they need to somehow authenticate without linking a request to an identity.

Each request to the LLM service comes with metadata, but there are no user identifiers in the request – encrypted or not. Instead, the user is authenticated through the use of RSA Blind Signatures (RFC 9474), which is also the authentication mechanism used in Private Relay:

Upon completion of this protocol, the server learns nothing, whereas the client learns sig [the authentication signature]. In particular, this means the server learns nothing of msg or input_msg [the request including the prompt to the LLM] and the client learns nothing of sk [secret authentication key].

So this approach allows a client iPhone to ask the server to sign a message. The server’s secret key is never exposed to the iPhone, and the iPhone’s message being signed is kept secret from the server. The request to sign can include identifying information about who is making it, providing a first level of authorization. The returned signature then flows along with any requests to the LLM service, allowing Apple servers to validate that the request comes from an allowed device. Crucially, the anonymity of the request to the LLM is preserved while validating that the requester is authorized.
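To make the blinding concrete, here’s the textbook RSA math underneath RFC 9474, stripped of the RSA-PSS encoding and hashing the RFC actually requires (the token contents are made up; this is purely illustrative):

```python
# Textbook RSA blind signature: the server signs a blinded value without ever
# seeing the message or linking the resulting signature back to the client.
import math
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
n = priv.public_key().public_numbers().n
e = priv.public_key().public_numbers().e
d = priv.private_numbers().d

msg = int.from_bytes(b"anonymous access token", "big")   # hypothetical token

# Client: blind the message with a random factor r (coprime to n).
while True:
    r = secrets.randbelow(n)
    if r > 1 and math.gcd(r, n) == 1:
        break
blinded = (msg * pow(r, e, n)) % n

# Server: signs the blinded value; it learns nothing about msg.
blind_sig = pow(blinded, d, n)

# Client: unblind to recover an ordinary RSA signature over msg.
sig = (blind_sig * pow(r, -1, n)) % n

# Later, any Apple server can verify the signature without learning which
# account originally requested it.
assert pow(sig, e, n) == msg
print("blind signature verifies")
```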

Anonymizing IP Addresses

Another key weakness in systems like this is the source IP address, which can be a strong indicator of a person or organization of interest to an attacker. To anonymize IP addresses, Apple again turns to technology they’ve already built for their Private Relay feature using Oblivious HTTP (RFC 9458). The gist is that some third party (possibly Cloudflare) receives all of the requests and aggregates and proxies them through to Apple. Apple doesn’t see the source IP addresses (they see the IP of the aggregator) and couldn’t target a particular user, whether compelled, compromised, or acting for their own purposes – though they could target a particular region.
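Conceptually, the split of knowledge looks something like this (real Oblivious HTTP uses HPKE key encapsulation; the symmetric Fernet key and the addresses below are stand-ins purely to show who can see what):

```python
# Who-sees-what in an OHTTP-style relay setup (conceptual sketch only).
from cryptography.fernet import Fernet

gateway_key = Fernet.generate_key()   # in real OHTTP: the gateway's HPKE public key
gateway = Fernet(gateway_key)

def client(prompt: str) -> tuple[str, bytes]:
    # The client seals the request so only the gateway (Apple) can read it.
    return "203.0.113.7", gateway.encrypt(prompt.encode())  # (client IP, sealed payload)

def relay(client_ip: str, sealed: bytes) -> tuple[str, bytes]:
    # The third-party relay sees the client's IP but only ciphertext.
    print(f"relay sees IP {client_ip}; payload is opaque")
    return "relay.example.net", sealed        # forwards under its own address

def apple_gateway(source: str, sealed: bytes) -> None:
    # Apple sees the prompt but only the relay's address, not the user's IP.
    print(f"gateway sees source {source}; prompt: {gateway.decrypt(sealed).decode()}")

apple_gateway(*relay(*client("summarize my notes")))
```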

Protecting Payloads

But the third party is then in a position to potentially see the requests, particularly since they act as a proxy and terminate TLS. And in fact, Apple also uses internal load balancers and other infrastructure that are likewise in a position to see the contents of requests.

To solve this problem, Apple encrypts the contents of the request twice. First, the request is encrypted directly to the intended server (the chosen private compute node), and then the entire request, including some unencrypted metadata, is protected by TLS. There’s some sort of initial dance where a requesting device asks for available servers for a particular type of request (say a ChatGPT request) and then the load balancer provides a list of available nodes and their public keys. In theory, these public keys are verifiable and the corresponding private keys are locked within Secure Enclaves from which they can’t be extracted.
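Here’s a rough sketch of that double wrapping. The Fernet keys stand in for the node’s enclave-held key and for the TLS session, and the metadata field is made up; Apple hasn’t published the exact format, so this only shows the layering:

```python
# Two layers: an inner payload only the chosen PCC node can open, wrapped in
# an outer layer (standing in for TLS) that carries routing metadata.
import json
from cryptography.fernet import Fernet

node_key = Fernet.generate_key()   # held inside the chosen node's Secure Enclave
tls_key = Fernet.generate_key()    # stand-in for the TLS session to the load balancer
node, tls = Fernet(node_key), Fernet(tls_key)

# Inner layer: only the selected PCC node can read the prompt.
inner = node.encrypt(b"prompt: draft a reply to this email ...")

# Outer layer: metadata the infrastructure needs plus the sealed prompt.
outer = tls.encrypt(json.dumps({
    "request_type": "server-model-inference",   # hypothetical metadata field
    "payload": inner.decode(),
}).encode())

# The load balancer terminates "TLS" and sees only metadata plus ciphertext.
envelope = json.loads(tls.decrypt(outer))
print("load balancer sees:", envelope["request_type"], "(payload opaque)")

# The chosen node opens the inner layer and recovers the prompt.
print("node sees:", node.decrypt(envelope["payload"].encode()).decode())
```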

Randomizing User/Server Associations

Apple goes to great pains to prevent targeted attacks. If an attacker wants to see what Person of Interest X is doing with AI, Apple wants that attacker to have to compromise the entire system. It’s a core goal that data for a particular user will not predictably go to any given server and can’t, through compromise, be steered to a particular server. They do this by anonymizing the requests so the servers can’t steer them based on user, and by using a protocol where the server provides a list of potential nodes and the client device decides which node to actually query. Apple didn’t say how that choice is made, but presumably the selection is random.

In this way, even if a server is compromised, the attacker will get a random sampling of prompts and not necessarily anything of specific interest.
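If the selection really is random, the client-side logic is trivial. The node list and uniform random choice below are assumptions on my part, not something Apple has specified:

```python
# Client-side node selection (assumed uniform random; Apple hasn't said).
import secrets

candidate_nodes = [                       # hypothetical list returned by the load balancer
    {"id": "pcc-node-a", "pubkey": "..."},
    {"id": "pcc-node-b", "pubkey": "..."},
    {"id": "pcc-node-c", "pubkey": "..."},
]

chosen = secrets.choice(candidate_nodes)  # the device, not the server, decides
print("sending request to", chosen["id"])
```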

Tamper Resistance

Hardware Supply Chain

Perhaps the most impressive part of this Apple project is their thinking around tamper resistance. They are taking many steps to use certificate-transparency-style techniques that let them record information about manufactured parts and then validate those values before deployment. They’re also using a third party to monitor the supply chain and provide another layer of verification and validation of the components.

Software Supply Chain

On the software side, every image that will be run is recorded in a public ledger, and clients will not interact with any servers that aren’t listed in the tamper-resistant record:

user devices will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software … once a release has been signed into the log, it cannot be removed without detection, much like the log-backed map data structure used by the Key Transparency mechanism for iMessage Contact Key Verification.
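The client-side check this implies is simple in spirit, even if the real log is an append-only, cryptographically verifiable structure rather than the plain set of hashes in this sketch:

```python
# Only talk to nodes whose attested software measurement appears in the
# public transparency log (conceptual sketch).
import hashlib

published_log = {
    hashlib.sha256(b"pcc-os-release-1.0").hexdigest(),   # hypothetical releases
    hashlib.sha256(b"pcc-os-release-1.1").hexdigest(),
}

def node_measurement(image: bytes) -> str:
    # Stand-in for the measurement a PCC node cryptographically attests to.
    return hashlib.sha256(image).hexdigest()

def willing_to_send(measurement: str) -> bool:
    return measurement in published_log

print(willing_to_send(node_measurement(b"pcc-os-release-1.1")))                # True
print(willing_to_send(node_measurement(b"pcc-os-release-1.1-with-backdoor")))  # False
```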

Transparency

They further promise to share every published and allowed image with security researchers who can test them. This is a strong move. Researchers will still be black-box testing the images, but Apple says it will (for the first time?) grant access to some of the source code:

to further aid research we will periodically also publish a subset of the security-critical PCC source code

It’s hard for a security researcher to be sure there isn’t some sort of backdoor just by doing black-box testing of a binary and reviewing code snippets that may or may not reflect what was actually used to build that binary. But this is a difficult problem, and I appreciate that Apple is inviting scrutiny and offering researchers not just the images, but source code and sandbox environments in which to test them. As a further bonus, Apple is providing unencrypted versions of the firmware for their security chips so researchers can attack more of the stack.

The Imperfections

Fallible Developers

Let’s assume Apple’s good intent (I think this investment demonstrates it), but even so, we must assume fallible software developers who make mistakes and who can be bribed or coerced by sophisticated intelligence agencies. Any of these things could lead to weaknesses in the implementation.

An obvious possibility to consider is a vulnerability in the LLM-serving code that leads to remote code execution and takeover of a PCC node. Apple is doing everything they can to minimize this possibility by shrinking surface areas, using a memory-safe language for much of the code, keeping interpreters off the servers, processing inside sandboxes, requiring signed code only, and so on. It’s a great recipe, but it isn’t impervious.

We’ve seen sophisticated attackers get around these sorts of protections on Apple devices before. The difficulty of such an attack is extremely high, so these measures matter. But these LLMs now take text, images, video, and audio, and we’ve seen cases in the past where Apple’s parsing of file formats has led to exploits even when the parsing is done in a sandbox behind a “BlastDoor”.

Node Compromise Scenario

So despite the restrictions meant to keep outside code from running, let’s say an attacker is able to somehow compromise a PCC node. And if they can compromise one, they can compromise many. With this access, what can they do? They could intercept queries and exfiltrate the prompts. That’s bad. But then again, those queries are anonymized, so is there much harm?

Not Really Always Anonymous

To answer that, we need to consider how anonymous prompts really are. Let’s stipulate that you can’t tell from the metadata of the request who made it, aside from the region in which it was made. However, the prompt itself could contain almost anything. People feed entire documents and notes into these things, and that may happen without their full understanding (for example, if they want to use ChatGPT to query their calendar). The context that rides along in a prompt can contain names, addresses, and all kinds of things. It could contain a psychiatrist’s notes about a named patient. It could carry any number of identifying details, which undercuts all the work Apple has done to anonymize the data.

A compromise of one of these systems and the exfiltration of prompts could be devastating to some people. But critically, Apple has made it very hard to target specific people. If China’s hackers get in, what they really want is juicy government or high-tech secrets. What they’re likely to get is a lot of nonsense, from high school homework assignments to fan fiction to who knows what.

And that’s probably the best protection they can offer, realistically. Yeah, highly sophisticated and well-funded intelligence agencies may still be able to get in, but the chances of them stumbling onto the data they want are pretty low. Apple makes this point in the blog:

An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted Private Cloud Compute users without attempting a broad compromise of the entire PCC system. This must hold true even for exceptionally sophisticated attackers who can attempt physical attacks on PCC nodes in the supply chain or attempt to obtain malicious access to PCC data centers … targeting users should require a wide attack that’s likely to be detected

Not True Confidential Computing

Most attempts to build trusted computing environments, where a server can operate on behalf of a user in a provably private and secure way that prevents even admins from seeing requests and responses, start with a secure enclave in which volatile memory is encrypted and processes on a system are protected from each other. Intel’s SGX and AMD’s SEV features do this and are the basis for most systems of this nature.

Unfortunately, it doesn’t appear that Apple is encrypting memory.

No Discussion of Side Channels

The most alarming attacks against confidential compute environments have been side-channel attacks, where an attacker uses timing information, power usage, or any of a hundred other ancillary measurements to extract data. This sounds like mystical magic, but researchers have made these attacks very practical, recovering all sorts of private information with these techniques. Similarly, CPU cache attacks allow processes to steal data from other processes even across sandboxes (and this sort of attack basically killed Intel SGX).

Unfortunately, the blog post didn’t indicate that Apple is defending against these types of attacks.

Final Thoughts

Security is based on assumptions. All of our cryptography depends on various assumptions around hard math problems (some of which quantum computers break). The assumptions that Apple made here seem reasonable. This isn’t an impregnable system, but it does seem to be solid. And we only know part of what they’re doing; what we’ve seen from Apple so far is an overview, and they’ve promised to release more details later.

While I’m disappointed they didn’t talk about side-channel attacks or leverage encrypted memory, I don’t see any major attack surfaces left exposed. And holding someone to a standard of perfection is often the enemy of progress in the quest to build a more private and secure digital world. I talked about this recently in my note on the Telegram/Signal “crypto war”.

Apple has an enormous asset: the sheer number of people using their systems generates an incredible amount of noise – an enormous haystack, if you will. If they only hosted government customers, a nation-state attacker could compromise a single node and make it rain. But because they have a billion and a half active users, and because Apple has made targeted attacks extremely difficult, any sophisticated attacker is going to have trouble here. And unsophisticated attacks are probably dead on arrival.

While it’s still likely that there will be vulnerabilities because bugs in software are inevitable, Apple’s embrace of security researchers with transparency and bug bounties is a good mitigation.

Apple’s final sentence in the blog had a bit of a marketing whiff:

With sophisticated technologies to satisfy our requirements of stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency, we believe Private Cloud Compute is nothing short of the world-leading security architecture for cloud AI compute at scale.

The thing is, in this case, they’re absolutely right. Because OpenAI, Microsoft, Google, and other big players have done nothing to protect the privacy of GenAI users. Zero. There’s literally no competition for confidential AI at scale.

Microsoft, in particular, has the ability to offer a version of OpenAI’s models running in a confidential compute environment. They already have confidential-compute H100 GPUs from Nvidia running in Azure. Customers can run models confidentially, but Microsoft, for some reason, doesn’t. It’s baffling to me.

So the bar for creating the best private AI compute architecture at scale was really low. It pleases me that Apple didn’t just clear the low bar, but has set the bar for everyone else at a very high level. This is what we need.