MOUNTAIN VIEW, Calif. — Google Cloud (GOOGL) has deployed its cross-cloud network, tuned specifically for agentic AI, to map the “reasoning loops” that trigger massive surges in machine-to-machine traffic. The new A4 VM family offers a 2.25X increase in peak compute, designed to handle unpredictable inference spikes from autonomous agents.
A Fortune 500 bank recently found that almost 40% of the requests to its AI systems came from software agents, not people. These agents communicated with each other across clouds, APIs, and internal systems. While security teams could track employee logins, they struggled to follow autonomous AI actions moving between platforms in real time. This gap is now a major risk in today’s agentic AI infrastructure.
At this year’s Google Cloud Next, Google focused less on model performance and more on identity verification, policy enforcement, and cross-cloud governance for autonomous AI systems. This shift highlights a new reality in enterprise computing: the biggest AI risk may now come from trusted machines acting autonomously at scale, not just from human misuse.
Why Agentic AI Infrastructure Changes Enterprise Security
In the past, enterprise security was based on the idea that people started most workflows. Employees would log into apps, move files, request access, and approve actions. AI systems helped with these tasks, but usually didn’t start on their own.
That has changed.
Today’s agentic AI infrastructure enables AI agents to handle tasks such as scheduling, querying databases, calling APIs, approving workflows, and interacting with other systems without human intervention. For example, a procurement AI can negotiate prices with suppliers, a customer support AI can escalate refunds, and a financial agent can trigger automated compliance checks.
These operational gains are substantial. So are the risks.
With more machine-to-machine traffic, billions of automated interactions now happen every day across different cloud providers. Security teams must now verify not just who is accessing a system, but also which AI agent initiated the request, whether the request follows policy, and whether other autonomous agents can trust the response.
Google’s latest cross-cloud architecture aims to solve that problem by embedding identity-aware networking directly into its AI stack.
Google Cloud Next and the Push Toward Zero Trust AI
At Google Cloud Next, executives stressed the importance of built-in identity controls for AI systems operating across hybrid and multi-cloud environments. This approach comes from lessons learned during the last decade of enterprise cloud adoption.
When companies spread workloads across AWS, Azure, Google Cloud, and private infrastructure, traditional perimeter security becomes weaker. AI accelerates that erosion, since autonomous systems often pull data and services from multiple environments simultaneously.
This is where zero-trust AI comes into play.
In a zero-trust AI model, no application, workload, or AI agent is trusted just because it’s inside the network. Every interaction needs to be verified, and each workload must keep checking identity credentials, behavior, and access permissions. That verification becomes significantly harder when AI agents interact autonomously.
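In code, the zero-trust rule described above reduces to a simple shape: verify identity first, then check an explicit per-agent policy, with no implicit trust for “internal” callers. The following is a minimal, hypothetical sketch; the names (AgentRequest, POLICIES) are illustrative and not a real Google Cloud API.

```python
# Hypothetical sketch of a zero-trust check on an agent-initiated request.
from dataclasses import dataclass

@dataclass
class AgentRequest:
    agent_id: str
    credential_valid: bool  # e.g. a signed workload identity token already checked
    action: str
    resource: str

# Explicit per-agent allow-lists: even agents inside the network get no default access.
POLICIES = {
    "billing-agent": {("read", "invoices"), ("write", "invoices")},
    "support-agent": {("read", "tickets")},
}

def authorize(req: AgentRequest) -> bool:
    """Every interaction is verified: identity first, then explicit policy."""
    if not req.credential_valid:
        return False
    allowed = POLICIES.get(req.agent_id, set())
    return (req.action, req.resource) in allowed
```

In a real deployment the credential check would involve cryptographic verification of a workload identity token, and the policy lookup would hit a central enforcement engine rather than an in-process table.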
Picture a healthcare provider using AI systems on three different clouds. One AI reviews medical images, another checks insurance, and a third schedules patients. These systems constantly share sensitive information through APIs and orchestration layers. If a compromised agent gains too many permissions, the risk can quickly spread across the entire system.
Google’s Cross-Cloud Identity Framework aims to mitigate those risks by establishing persistent authentication loops tied directly to workload behavior.
AI Orchestration Is Becoming A Governance Problem
Business leaders often talk about AI orchestration as a way to boost productivity, focusing on automating workflows, scaling output, and integrating applications. But more and more, orchestration is turning into a governance challenge.
Autonomous agents almost never work alone. They rely on networks of connected services. One agent sets off another. APIs swap tokens, data pipelines update on the fly, and decisions are made in milliseconds. That complexity creates accountability gaps.
A retailer using hundreds of AI agents might struggle to explain why a pricing engine made a particular decision, especially when orchestration layers span multiple vendors and clouds. Regulators are starting to pay attention. European and US authorities are now asking companies to show how their AI systems share information and enforce policy controls.
This underscores the growing importance of mapping AI agent security compliance across the enterprise. Companies now need detailed visibility into which AI systems access regulated data, how permissions propagate, and where governance controls apply across distributed environments.
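Compliance mapping of this kind is, at its core, a reachability question over a delegation graph: which agents can directly or transitively touch a regulated dataset? The sketch below illustrates the idea with an invented graph; real deployments would build it from audit logs and IAM bindings, and the agent and resource names here are hypothetical.

```python
# Illustrative compliance-mapping sketch: find every agent with direct or
# inherited access to a regulated resource via delegation chains.
from collections import deque

# agent -> set of agents or resources it can call/read (hypothetical data)
ACCESS = {
    "scheduler": {"records-agent"},
    "records-agent": {"patient-db"},   # patient-db holds regulated data
    "pricing-agent": {"catalog-db"},
}

def can_reach(start: str, target: str) -> bool:
    """Breadth-first search over the delegation graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in ACCESS.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def agents_touching(resource: str) -> set:
    """Every agent whose permissions propagate to the given resource."""
    return {agent for agent in ACCESS if can_reach(agent, resource)}
```

Here `agents_touching("patient-db")` flags both the records agent and the scheduler that can invoke it, which is exactly the kind of inherited-access visibility regulators are asking for.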
Google’s approach attempts to integrate compliance mapping directly into infrastructure management rather than treating it as a separate auditing exercise.
The Role Of A4 VM Family In Secure AI Scaling
Security talks often skip over the hardware side, but the physical infrastructure is crucial when companies scale up autonomous AI workloads.
Google introduced the A4 VM family to support high-performance AI inference and training workloads tied to next-generation orchestration environments. These systems combine advanced GPU networking with workload isolation optimized for large-scale AI deployment.
This is about more than just computing power.
As machine-to-machine traffic grows, the efficiency of the infrastructure directly impacts security visibility. Slow orchestration layers can create blind spots for monitoring, and delays in verification can increase risk. High-performance infrastructure enables continuous authentication and analysis without slowing things down.
The A4 VM family also supports increasingly complex AI orchestration models where thousands of AI agents coordinate tasks simultaneously across clouds.
That scale changes security economics.
A human security analyst can’t keep up with millions of autonomous interactions every hour. Instead, companies need infrastructure that can automatically and continuously check identities.
Secure AI Agents Need Persistent Identity Verification
The idea of secure AI agents seems simple, but it gets complicated when organizations try to put it into practice.
Most companies still use identity systems built for people. Passwords, VPNs, and legacy access controls don’t work well when autonomous AI agents constantly share data without oversight.
Persistent identity verification changes the way this works.
Using Google’s cross-cloud framework, AI agents maintain cryptographic identities tied to policy enforcement engines. The system continuously evaluates whether an agent’s actions correspond to approved behavioral patterns. If anomalies emerge, access restrictions trigger automatically.
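The continuous-evaluation loop described above can be sketched as a monitor that scores each action against an approved behavioral baseline and automatically restricts the agent after repeated anomalies. The baseline, threshold, and action names below are invented for illustration; they do not reflect Google’s actual enforcement engine.

```python
# Hedged sketch of persistent identity verification: actions outside an
# approved baseline count as anomalies, and repeated anomalies trip an
# automatic access restriction.

APPROVED_ACTIONS = {"query_inventory", "create_ticket"}  # hypothetical baseline

class AgentMonitor:
    def __init__(self, max_anomalies: int = 3):
        self.anomalies = 0
        self.max_anomalies = max_anomalies
        self.restricted = False

    def observe(self, action: str) -> bool:
        """Return True if the action is allowed; restrict after repeated anomalies."""
        if self.restricted:
            return False  # once restricted, even baseline actions are denied
        if action not in APPROVED_ACTIONS:
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.restricted = True  # restriction triggers automatically
            return False
        return True
```

A production system would replace the static allow-list with learned behavioral profiles and tie the restriction to revocation of the agent’s cryptographic credentials, but the control loop is the same: verify continuously, restrict on anomaly.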
This is important because more attacks now target orchestration layers instead of just individual endpoints.
Cybersecurity firms are already seeing more attempts to manipulate AI agents using poisoned prompts, malicious APIs, and compromised third-party integrations. Just one weak orchestration chain can put sensitive company data at risk across many environments.
The move towards zero-trust AI reflects an industry recognition that static defenses aren’t enough against autonomous systems operating at machine speed.
Enterprise AI Infrastructure Enters a Compliance Era
For years, enterprise AI conversations have focused on model capability. Companies competed over parameters, inference speed, and multimodal features. That competition continues, but infrastructure governance is now just as important. Meanwhile, the emergence of agentic AI infrastructure changes how companies view risk. Boards now want to know whether AI systems can prove their identity, follow policies, and track autonomous decisions across different clouds.
At Google Cloud Next, Google positioned its cross-cloud identity architecture as a foundation for the next phase of enterprise AI adoption. The timing is significant. Autonomous systems will soon manage procurement, finance, logistics, healthcare coordination, and cybersecurity operations with minimal human intervention. The companies that succeed might not be the ones with the most powerful AI models, but those that build the most reliable identity frameworks around them.
Source: Cross-cloud infrastructure innovation for the agentic enterprise