Securing AI is now a fundamental pillar of Microsoft’s modern security. Our AI-first security platform empowers organizations to address today’s threats and safeguard their future.  

A year ago, we launched Microsoft Security Copilot to help defenders quickly detect, investigate, and respond to network incidents. We have now introduced the next step: column AI agents that help with phishing, data security, and identity management. As cyberattacks become more complex and numerous, AI agents are now vital to modern security.  

Phishing attacks remain among the most common and harmful cyber threats. From December to January 2024, Microsoft found over 30 billion phishing emails targeting customers. The volume of these attacks can overwhelm security teams. Teams relying only on manual work and disconnected tools may struggle to quickly sort threats and manage risk.  

The new phishing triage in Microsoft Security Copilot handles routine phishing alerts and attacks, letting human defenders focus on tougher threats and forward-looking security work. This shows how agents can change security.  

Securing and managing AI is still a top priority for organizations. We are excited to bring new features to Microsoft Defender, Microsoft Intra, and Microsoft Purview to help with this.  

Keep reading to discover more about the new agents in Security Copilot and the latest AI security updates, and see how these innovations can support your organization. Reach out to us today and take the next step in strengthening your security with AI.  

Expanding Microsoft Security Copilot With New AI Agent Capabilities 

Microsoft threat intelligence now processes 84,000,000,000,000 signals per day, underscoring how quickly cyberattacks are growing, including 7,000 password attacks per second. To keep up, scaling defenses with AI agents is a must. We’re adding six new security agents from Microsoft and five from our partners to Security Co‑Pilot, which will be available for preview in April 2025.  

Six New AI Agent Solutions From Microsoft Security 

The six new Microsoft Security Copilot agents help teams handle large volumes of security and IT tasks independently, and they work seamlessly with Microsoft security tools. These agents are built for security. Learn from feedback. Adapt to your workflows and follow Microsoft’s zero-trust framework with Teams in control. Agents speed up responses, focus on the biggest risks, and help organizations protect themselves more efficiently.  

Security co-pilot agents will be available throughout Microsoft’s security platform and are designed for the following tasks:  

  • The phishing triage agent in Microsoft Defender automatically classifies phishing alerts, distinguishing between genuine threats and false positives. It provides clear explanations for each decision and refines its detection processes using administrator feedback.  
  • Alert triage agents in Microsoft Purview identify the most important Data Loss Prevention and Insider Risk alerts. They improve their prioritization accuracy over time, using administrator input to refine results.  
  • The Conditional Access Optimization Agent in Microsoft Intra continuously monitors for new users or applications outside current access policies. It flags these gaps, recommends specific policy updates, and offers easy-to-apply fixes for identity teams.  
  • The vulnerability remediation agent in Microsoft Intune ranks vulnerabilities and recommends fixes for speeding up OS patching after admin approval.  
  • The threat intelligence briefing agent in Security Copilot gathers and summarizes the most relevant and timely threat intelligence based on organization-specific attributes and cyber threat exposure levels.  

Security co-pilots and agentic capabilities are examples of how we continue to deliver innovation, leveraging our decades of AI research. See how Agents work.  

Five New Agentic Solutions From Microsoft Security Partners 

Security works best when everyone is involved, and Microsoft is focused on supporting our security community with an open platform. This allows partners to create solutions that benefit customers. Here are five new AI agents from our partners coming to Security Copilot:  

  • The privacy breach response agent from OneTrust examines data breaches and offers tailored guidance to privacy teams on meeting specific regulatory requirements following an incident.  
  • The network supervisor agent from Aviatrix identifies root causes and summarizes issues with the VPN gateway or the site2cloud connection, including outages and failures.  
  • The SecOps tooling Agent from Blue Voyant reviews a Security Operations Center (a team that monitors and responds to security issues) and its controls, then suggests ways to improve security operations, controls, and complaints.  
  • The alert triage agent from Tanium provides analysts with the relevant context for each alert, enabling them to quickly and confidently determine the right response.  
  • The task optimizer agent from Fletch helps organizations predict and prioritize the most important cyber threat alerts, addressing alert fatigue and the challenge of impossible‑to‑prove security.  

New AI-Powered Data Security Investigations and Analysis 

We are also introducing Microsoft Purview Data Security Investigations to help security teams quickly find and address risks related to sensitive data exposure. These investigations use AI-powered content analysis to identify sensitive data and other risks associated with incidents. Investigators can use these understandings to work securely with partner teams and simplify complex tasks, enabling faster mitigation. This solution connects data security investigations to Defender incidents and Purview Insider Risk cases and will be available for preview in April 2025.  

Further Advances in Securing and Governing Generative AI 

A strong cybersecurity foundation drives successful AI transformation. As more organizations adapt generative AI, securing and managing how they create and use AI at work becomes even more important. Our new report, Secure Employee Access in the Age of AI, reveals that 57% of organizations report more security incidents due to AI use, although most recognize the need for AI controls. Sixty percent have not yet implemented them.  

Securing AI is a new challenge, and leaders are especially concerned about data oversharing, new threats and vulnerabilities, and compliance. Microsoft security solutions are designed for AI to help address these issues with new advanced features that protect AI investments, whether for organizations.  

AI Security Posture Management For Multimodal And Multi-Cloud Environments 

Organizations building their own AI solutions need to strengthen security for AI models running on different platforms and clouds. To help with this, Microsoft Defender now offers AI security posture management for Microsoft Azure and Amazon Web Services. It also supports Google Vertex AI and all models in the Azure AI Foundry catalog. Starting in May 2025, this will cover models like Gemini, Gamma, Meta, LLaMA, Mistral, and custom models. With this new multi-cloud support, organizations can see their AI security posture from code to runtime across Azure, AWS, and Google Cloud. Microsoft Defender Health organizations get started with AI security across multiple models and clouds.  

New Detection And Protection From Emerging AI Threats 

AI introduces new risks, such as more revenue for cyberattacks and undiscovered vulnerabilities. The Open Worldwide Application Security Project (OWASP) lists the top risks and solutions for generative AI apps. Starting in May 2025, Microsoft Defender will offer new and improved AI detections for several OWASP-identified risks, such as indirect prompt injection attacks, sensitive data exposure, and wallet abuse. These new detections will help SOC analysts better protect custom AI apps, with added safeguards for Azure OpenAI service and models in the Azure AI Foundry catalog.  

New Controls To Prevent Risky Access And Data Leaks Into Covert AI Apps 

As more people use generative AI (AI that can create text, images, and other content), many organizations are finding that employees are using AI apps that have not been approved by IT or security teams. This unapproved use, known as Shadow AI (the use of AI tools without company oversight), has greatly increased the risk of sensitive data leaks. To help with this, we are announcing the general availability of the NEI web category filter in Microsoft Intranet Internet Access (a service for managing secure internet connections). This feature lets organizations set detailed access permissions and enforce policies about which users and groups can use different AI applications.  

After setting access policies for AI apps, the next step is to stop users from entering sensitive data into them. To help, we are launching a preview of Microsoft Purview Browser Data Loss Prevention (DLP) controls in Microsoft Edge for business. Security teams can now enforce DLP policies and prevent sensitive data from being entered into generative AI apps. This starts with ChatGPT, Copilot Chat, DeepSeek, and Google Gemini. Learn more about our innovations in security for AI.  

New Phishing Protection In Microsoft Teams For Safer Collaboration 

Email is still the main way phishing attacks happen, but collaboration tools are now common targets too. Starting in April 2025, Microsoft Defender for Office 365 will provide built-in protection against phishing and other advanced threats in Teams. Teams will be safer from harmful links and attachments. Thanks to instant scanning, SOC teams will also get full visibility into related attempts and incidents, with alerts and data available in Microsoft Defender.  

Agile Innovation to Build a Safer World 

We continuously enhance Microsoft security by applying the principles of our Secure Future initiative, aiming to deliver strong, comprehensive protection through advanced AI tools. Thank you for joining us in building a safer world. 

Source: Microsoft unveils Microsoft Security Copilot agents and new protections for AI