Apple’s Private Cloud Compute (PCC) is a new, highly secure system for running complex AI tasks that exceed what your device can handle on its own. It is built to ensure user data is never retained and remains inaccessible even to Apple.

With verifiable transparency, Apple enables external security experts to audit the exact code executing on its Apple silicon servers. This mechanism turns privacy from a stated assurance into a technically verifiable property.

How Apple’s Private Cloud Compute Ensures Deletion 

PCC operates as a stateless architecture, processing data only for the duration of each request and ensuring no persistence post-completion.  

  • Cryptographic erasure: Data is destroyed immediately after a request is fulfilled. Through cryptographic erasure, recovery is impossible even if server hardware is compromised.
  • No Administrative Access: The design restricts even Apple administrators from accessing user data during processing.  
  • Secure Enclave and Verified Boot: PCC integrates security features from iPhone and Mac, such as Secure Enclave and Secure Boot, to ensure only authenticated code executes on servers.  
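The cryptographic-erasure idea above can be illustrated with a minimal sketch. This is not Apple’s actual mechanism; it is a conceptual toy in which each request is encrypted with a fresh one-time key, so discarding the key renders any lingering ciphertext unrecoverable:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style XOR: without the key, ciphertext is unrecoverable."""
    return bytes(d ^ k for d, k in zip(data, key))

def handle_request(request: bytes) -> None:
    # Protect the request with a fresh, random per-request key.
    key = os.urandom(len(request))
    ciphertext = xor_bytes(request, key)

    # ... the plaintext exists only while the request is being processed ...
    plaintext = xor_bytes(ciphertext, key)
    assert plaintext == request

    # Cryptographic erasure: discard the key. Any ciphertext still
    # resident in memory or on disk is now unrecoverable.
    del key
```

Real systems would use an authenticated cipher and hardware-backed key storage rather than a one-time pad, but the principle is the same: destroying a small key destroys access to all the data it protected.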

How To Verify Data Deletion (Transparency Logs) 

Apple gives researchers and, in some cases, users ways to check these privacy claims:  

  1. Transparency logs: Apple publishes immutable logs detailing all software deployed on PCC, providing a tamper-evident record.
  2. Virtual Research Environment (VRE): Security researchers can run a VRE, which functions as a Private Cloud Compute node on an Apple silicon Mac, to verify software integrity and confirm that no data is stored.
  3. Device-level verification: Before initiating communication, devices use cryptographic attestation to verify that a server is running code matching an entry in the transparency log.
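The device-level check in step 3 can be sketched as a membership test: the device compares the server’s attested software measurement against digests published in the transparency log. The log contents and measurement scheme below are hypothetical stand-ins, not Apple’s actual format:

```python
import hashlib

# Hypothetical transparency log: SHA-256 digests of every software
# release authorized to run on PCC nodes (illustrative values only).
TRANSPARENCY_LOG = {
    hashlib.sha256(b"pcc-release-1.0").hexdigest(),
    hashlib.sha256(b"pcc-release-1.1").hexdigest(),
}

def device_accepts(server_software_image: bytes) -> bool:
    """Send data only if the server's attested measurement matches
    an entry published in the transparency log."""
    measurement = hashlib.sha256(server_software_image).hexdigest()
    return measurement in TRANSPARENCY_LOG
```

An unmodified release passes the check, while any tampered image produces a digest absent from the log and is refused before user data is sent.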

How Users Can View Their Own Data Usage 

While experts audit the code, everyday users can track their own data usage through the Settings app:

  • Location: To view your data usage, open the Settings app, tap Privacy & Security, then tap Apple Intelligence Report.
  • Functionality: This feature lets users generate a report of requests sent to Private Cloud Compute, covering either the last 15 minutes or the last 7 days.
  • Export capability: Users can export the report as a .json file for closer inspection.

Note: The Apple Intelligence report might be empty if no requests were sent to the cloud, since many tasks are handled locally.  

Apple Intelligence is our personal intelligence system that brings generative models to iPhone, iPad, and Mac for features that require handling complex data with larger models. We developed Private Cloud Compute (PCC), a new cloud intelligence system designed for private AI processing. PCC stands out by combining custom Apple silicon and a secure operating system to deliver end-to-end security, ensuring that personal user data sent to PCC is accessible only to the user, not even to Apple. This extends Apple’s device-level privacy standards into the cloud, setting PCC apart from standard cloud AI approaches. We believe it is the most advanced security architecture ever created for cloud AI computing at scale.

Apple focused on device processing to keep user data secure and private. When user data is managed in the cloud, we use security measures such as end-to-end encryption or temporary processing with random identifiers to protect user privacy.  

Making secure, private AI processing in the cloud possible is a challenge. Data centers use powerful AI hardware to run complex machine learning models, but this requires unencrypted access to user requests, making end-to-end encryption unworkable. As a result, cloud AI must rely on traditional security methods, which present key challenges:

  • Verifying and enforcing privacy in Cloud AI is difficult. If a service claims not to log data, this is hard to confirm. Software changes can introduce logging without detection, and load balancers might log many user requests during troubleshooting.  
  • Delivering runtime transparency for AI in the cloud is difficult. Cloud AI services are often unclear about the software they use, and these details are usually kept private. Even if a service uses only open-source software that researchers can inspect, there is no common way for a user, device, or browser to confirm that it is connecting to an unmodified version of the software or to notice if the software has changed.  
  • Limiting privileged access in cloud AI environments is difficult. Operations require ongoing monitoring, and during incidents administrators rely on tools like SSH. Even with restrictions, enforcing access limits is hard: an administrator troubleshooting an issue may inadvertently copy confidential data, or stolen administrator credentials may expose user data to theft.

Apple devices like the iPhone and Mac can handle computation locally, and the security and privacy benefits are clear. Users control their own devices, researchers can inspect both hardware and software, and secure boot ensures runtime transparency. Apple does not have privileged access; for example, the Data Protection file encryption system prevents Apple from disabling or guessing an iPhone passcode.

However, to handle advanced requests, Apple Intelligence sometimes needs larger models that run in the cloud. To meet our users’ security and privacy standards there, we are extending our device security approach to the cloud.

When PCC is available in beta, we will share more details and address researchers’ questions in our next post.

Source: Private Cloud Compute: A new frontier for AI privacy in the cloud 
