Google drafts AI agents to secure systems against AI hackers
Summary
Google announced new AI agents and security tools designed to help security teams defend against AI-based attacks, particularly in response to threats like Anthropic Mythos. The company introduced three new agents within Google Security Operations to automate threat detection and response; expanded the Wiz platform to provide visibility across multiple cloud environments and AI development tools; and created new security measures such as AI-BOM (a system that catalogs all AI components used in an organization) and Agent Gateway, which governs how AI agents interact with each other and enforces security policies.
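As a rough illustration of the AI-BOM idea described in the summary (a machine-readable inventory of every AI component an organization runs), the following Python sketch uses an invented schema; the field names and example entries are assumptions for illustration, not Google's or Wiz's actual format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIBomEntry:
    """One catalogued AI component. All fields here are illustrative."""
    component: str       # name of the model, agent, or AI coding tool
    component_type: str  # e.g. "model", "agent", "dataset", "tool"
    provider: str
    version: str
    environment: str     # where the component is deployed

# A toy inventory covering the kinds of assets the article says are scanned.
inventory = [
    AIBomEntry("gemini-enterprise-agent", "agent", "Google", "1.0", "gcp-prod"),
    AIBomEntry("copilot-studio-bot", "agent", "Microsoft", "2024.2", "azure-prod"),
]

def to_ai_bom(entries):
    """Serialize the inventory to a JSON document for audit or export."""
    return json.dumps([asdict(e) for e in entries], indent=2)

print(to_ai_bom(inventory))
```

The point of such an inventory is simply that components which are not catalogued cannot be scanned or governed; the export format would in practice follow an established BOM standard rather than ad-hoc JSON.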
Solution / Mitigation
Google's explicit mitigations include:
1. Three new AI agents in Google Security Operations for threat hunting, detection engineering, and third-party context enrichment, now in or entering preview.
2. Wiz expansion supporting AWS, Azure, Databricks, AWS Agentcore, Gemini Enterprise Agent Platform, Microsoft Azure Copilot Studio, and Salesforce Agentforce, with inline scanning of AI-generated code and an AI-BOM inventory.
3. Agent Identity and Agent Gateway for governance and policy enforcement.
4. Deeper integrations for Model Armor to mitigate prompt injection (tricking an AI by hiding instructions in its input) and data leakage.
5. Reworked bot and fraud detection through Google Cloud Fraud Defense to distinguish between humans, bots, and AI agents.
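The Agent Gateway mitigation above amounts to policy enforcement over agent-to-agent interactions. A minimal deny-by-default sketch of that pattern follows; the agent names and the policy-table shape are hypothetical and do not reflect Google's actual API.

```python
# Deny-by-default policy table: caller agent id -> callee agent ids it may invoke.
# Agent names and the policy shape are invented for illustration only.
ALLOWED_CALLS = {
    "threat-hunting-agent": {"context-enrichment-agent"},
    "detection-engineering-agent": set(),
}

def authorize(caller: str, callee: str) -> bool:
    """Allow an agent-to-agent call only if an explicit policy entry permits it."""
    return callee in ALLOWED_CALLS.get(caller, set())

# Usage: a gateway would run this check before routing each inter-agent request.
print(authorize("threat-hunting-agent", "context-enrichment-agent"))  # allowed
print(authorize("unknown-agent", "context-enrichment-agent"))         # denied
```

The deny-by-default design matters here: an agent that is absent from the table (including a compromised or rogue one) can call nothing, rather than everything.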
Classification
Affected Vendors
Related Issues
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman…
CVE-2026-40087: LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-str…
Original source: https://www.csoonline.com/article/4162560/google-drafts-ai-agents-secure-systems-against-ai-hackers.html
First tracked: April 23, 2026 at 08:00 PM
Classified by LLM (prompt v3) · confidence: 82%