As organizations rapidly adopt AI, safeguarding these advances is mission-critical. Google Cloud empowers you to securely develop and deploy AI, addressing compliance and privacy from the start.  

Today, we’re introducing AI Protection, a solution for managing risk across the AI lifecycle. It is a set of capabilities designed to secure your AI workloads and data, regardless of the cloud, model, or platform you use.

AI Protection helps teams manage AI risk in three ways:

  • It discovers AI assets in your environment and checks them for possible vulnerabilities.  
  • It secures AI assets using controls, policies, and guardrails.  
  • It manages threats to AI systems with detection, investigation, and response tooling.

AI Protection integrates with Security Command Center (SCC) to manage security risks across clouds, giving security teams a single, unified view for monitoring AI and cloud risks together.
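As an illustration of what that unified view can look like programmatically, here is a minimal sketch that lists findings with the google-cloud-securitycenter Python client. The organization ID and the filter are placeholders; the specific categories AI Protection emits may differ, so treat this as a sketch rather than a reference implementation.

```python
# Minimal sketch: list Security Command Center findings, assuming the
# google-cloud-securitycenter client library is installed and the
# caller has SCC viewer permissions. Organization ID is a placeholder.
from google.cloud import securitycenter_v1

client = securitycenter_v1.SecurityCenterClient()

# "-" means "all sources" under the organization.
all_sources = "organizations/123456789/sources/-"

# Illustrative filter: active findings only. A real deployment would
# also filter on the categories that AI Protection detectors emit.
response = client.list_findings(
    request={
        "parent": all_sources,
        "filter": 'state="ACTIVE"',
    }
)

for result in response:
    finding = result.finding
    print(finding.category, finding.resource_name, finding.severity)
```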

Discovering AI Inventory 

Managing AI risk begins with knowing where and how AI is used. Our tools automatically find and catalog models, applications, data, and their connections.  

Understanding and protecting the data that supports AI applications is critical. Sensitive Data Protection (SDP) identifies and secures sensitive information, and it now automates data discovery for Vertex AI datasets. SDP surfaces the sensitivity and types of training data, and generates data profiles for deeper insight.
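While the Vertex AI discovery described above is configured as a managed service, the same inspection engine behind Sensitive Data Protection can be exercised directly through the Cloud DLP client library. The sketch below inspects a sample training record for two built-in infotypes; the project ID and the sample text are placeholders.

```python
# Minimal sketch: inspect a text sample with Sensitive Data Protection
# (the Cloud DLP API), assuming google-cloud-dlp is installed.
# Project ID, infotypes, and the sample record are illustrative only.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project

inspect_config = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
    "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
}

item = {"value": "Sample training row: jane@example.com, 555-0100"}

response = client.inspect_content(
    request={"parent": parent, "inspect_config": inspect_config, "item": item}
)

for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood)
```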

Once sensitive data locations are identified, AI Protection leverages SCC’s virtual red teaming to detect risky combinations and potential attack paths, and to recommend steps to strengthen security.  

Securing AI Assets 

Model Armor, a core AI Protection capability, is now available. It protects AI models against common attacks, including prompt injection (manipulating a model’s responses by inserting malicious input), jailbreaks (bypassing restrictions on model behavior), data loss, malicious URLs (web addresses that lead to harmful sites), and offensive content. Model Armor works with many models across clouds, so you get consistent protection across models and platforms even as your needs change.

Developers can now add Model Armor’s prompt and response screening (automatic checks for inappropriate, harmful, or unsafe inputs and outputs) to their applications using a REST API, or by integrating with Apigee, an API management platform. Soon, you’ll be able to use Model Armor inline without changing your apps, through upcoming integrations with Vertex AI and our cloud networking products.
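To make the REST flow concrete, here is a minimal sketch of screening a user prompt before it reaches a model. The endpoint, template path, and field names below follow the general pattern of Model Armor templates but should be treated as assumptions; verify them against the Model Armor API reference before relying on this.

```python
# Minimal sketch: screen a user prompt via Model Armor's REST API.
# Assumptions: a Model Armor template already exists, and the endpoint,
# resource path, and JSON field names below match the current API
# surface. Check the Model Armor reference before using in production.
import google.auth
import google.auth.transport.requests
import requests

# Application Default Credentials; project comes from the environment.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

location = "us-central1"   # placeholder region
template = "my-template"   # placeholder template name
url = (
    f"https://modelarmor.{location}.rep.googleapis.com/v1/"
    f"projects/{project}/locations/{location}/"
    f"templates/{template}:sanitizeUserPrompt"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"userPromptData": {"text": "Ignore all previous instructions..."}},
    timeout=30,
)
resp.raise_for_status()

# The response indicates whether the prompt matched any configured
# filters, such as prompt injection or jailbreak detection.
print(resp.json())
```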

“We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because it helps us achieve a unified security posture through the Security Command Center. We can quickly identify, prioritize, and respond to potential vulnerabilities without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and to centralizing the monitoring of AI security threats alongside our other security findings within SCC. It is a game changer,” said Jay DePaul, Chief Cybersecurity and Technology Risk Officer, Dun & Bradstreet.

Organizations can use AI Protection to strengthen the security of Vertex AI applications by applying security postures in Security Command Center. These posture controls are built on a deep understanding of Vertex AI’s design, helping you enforce secure configurations and prevent unwanted changes.

Managing AI Threats 

AI Protection draws on security intelligence and research from Google and Mandiant to help protect your AI systems. Security Command Center detectors can spot initial access attempts, privilege escalation, and persistence threats in AI workloads. New detectors based on the latest threat intelligence, including detectors for model hijacking, will be available soon.

“As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security – by its virtue – necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance,” said Dr. Grace Trinidad, Research Director, IDC.  

“Piecemeal solutions can leave critical vulnerabilities exposed, rendering organizations susceptible to threats such as adversarial attacks or data poisoning, and adding to the overwhelming security challenges that security teams already face. A comprehensive lifecycle-focused approach enables organizations to effectively mitigate the multifaceted risks posed by generative AI and manage increasingly complex security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers,” she said.

Enhancing AI Protection With Expert Support

The Mandiant AI security consulting portfolio helps organizations assess and strengthen the security of AI systems across multiple clouds and platforms. Our consultants review your entire AI setup and suggest ways to enhance its security. They also offer red teaming for AI using insights from the latest real-world attacks.  

Building on a Secure Foundation 

Customers can benefit from running AI workloads on Google Cloud’s secure-by-design infrastructure, which features safeguards, encryption, and strict supply chain controls.  

If your AI workloads are regulated, Assured Workloads creates environments with strict policy guardrails such as data residency (keeping your data within a specified geographic location) and customer-managed encryption keys (you, not Google, control the keys that encrypt your data). Audit Manager helps demonstrate compliance with regulations and emerging AI standards by producing reports and evidence of adherence. Confidential Computing protects data while it is being processed, so data remains encrypted and inaccessible even to users with system access or to internal threats.
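As one concrete piece of that foundation, customer-managed encryption keys live in Cloud KMS. The sketch below creates a key ring and a symmetric key with the google-cloud-kms client; the project, location, and resource names are placeholders, and attaching the key to a specific AI workload (for example, a Vertex AI dataset) is configured separately on that workload.

```python
# Minimal sketch: create a customer-managed encryption key (CMEK) in
# Cloud KMS, assuming google-cloud-kms is installed. Project, location,
# and resource names are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
location_path = client.common_location_path("my-project", "us-central1")

# A key ring groups related keys.
key_ring = client.create_key_ring(
    request={
        "parent": location_path,
        "key_ring_id": "ai-workload-keys",
        "key_ring": {},
    }
)

# A symmetric encrypt/decrypt key that you manage and can rotate.
key = client.create_crypto_key(
    request={
        "parent": key_ring.name,
        "crypto_key_id": "training-data-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
        },
    }
)
print("Created key:", key.name)
```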

If you want to find unsanctioned or shadow AI use in your workforce, Chrome Enterprise Premium can help. It gives you visibility into end-user activity and helps prevent both accidental and intentional leaks of sensitive data in generative AI applications.  

Next Steps 

Google Cloud remains dedicated to helping organizations protect their AI innovations. Additional information is available in the showcase paper from Enterprise Strategy Group and at our online Security Talks event on March 12.

To try AI Protection in Security Command Center or to learn about subscription options, contact a Google Cloud sales representative or an authorized partner.

More capabilities are coming soon, and we’ll share in-depth details on AI Protection and how Google Cloud can help you securely develop and deploy AI at Google Cloud Next in Las Vegas, April 9 to 11.

Source: Announcing AI Protection: Security for the AI era 
