aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 99 of 371
01

Double Agents: Exposing Security Blind Spots in GCP Vertex AI

security · research
Mar 31, 2026

Researchers discovered that AI agents deployed on Google Cloud Platform's Vertex AI could be weaponized as 'double agents' that secretly compromise systems while appearing to work normally. The vulnerability stems from excessive default permissions granted to service agents (special accounts that allow GCP services to access resources), which attackers can exploit to steal data, access restricted code, and gain unauthorized control over infrastructure.

Fix: Google revised their official documentation to explicitly document how Vertex AI uses resources, accounts, and agents. A permissions-audit sketch follows this item.

Palo Alto Unit 42
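
Because the root cause is over-broad default IAM grants, periodically auditing what the Vertex AI service agent can do is a natural complement to the documentation fix. The sketch below lists the project-level roles bound to that agent, assuming the conventional service-PROJECT_NUMBER@gcp-sa-aiplatform.iam.gserviceaccount.com identity and the Cloud Resource Manager v1 API; the project ID and number are placeholders.

```python
# Audit sketch: list project-level IAM roles granted to the Vertex AI
# service agent. Assumes Application Default Credentials are configured
# and that the agent follows the usual gcp-sa-aiplatform naming.
from googleapiclient.discovery import build

PROJECT_ID = "my-project"        # placeholder
PROJECT_NUMBER = "123456789012"  # placeholder

agent = (
    f"serviceAccount:service-{PROJECT_NUMBER}"
    "@gcp-sa-aiplatform.iam.gserviceaccount.com"
)

crm = build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

for binding in policy.get("bindings", []):
    if agent in binding.get("members", []):
        role = binding["role"]
        # Broad roles widen the blast radius of a hijacked agent.
        flag = "  <-- review" if role in ("roles/editor", "roles/owner") else ""
        print(f"{role}{flag}")
```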
02

The external pressures redefining cybersecurity risk

security · policy
Mar 31, 2026

Organizations face growing cybersecurity risks from forces outside their direct control: over 35% of data breaches come from compromised vendors or partners, geopolitical conflicts spawn new attack techniques that spread globally, and AI-driven automation makes attacks easier and cheaper to launch. Even well-defended organizations struggle because security depends on every link in an extended chain far beyond their own network, and those weak links are multiplying.

Fix: The source explicitly recommends: elevate OT (operational technology) security to board level and add OT risk to the Risk Register; segment networks to reduce the blast radius of attacks; implement a ransomware-resilient backup solution with immutable backups using a 3-2-1-1 strategy (three copies, two different media types, one offsite location, plus one immutable copy); use defense-in-depth strategies to avoid, mitigate, or transfer geopolitical cyber risk; and secure board awareness, since budget allocation typically follows. A minimal 3-2-1-1 checker is sketched after this item.

CSO Online
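
The 3-2-1-1 rule is mechanical enough to verify automatically against a backup inventory. Below is a minimal, hypothetical sketch of such a check; the Copy record and the example inventory are illustrative, not any vendor's API.

```python
# Hypothetical 3-2-1-1 checker: three copies, two media types,
# at least one offsite, at least one immutable.
from dataclasses import dataclass

@dataclass
class Copy:
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool

def satisfies_3211(copies: list[Copy]) -> bool:
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.immutable for c in copies)
    )

inventory = [
    Copy(media="disk", offsite=False, immutable=False),          # primary
    Copy(media="tape", offsite=True, immutable=False),           # offsite
    Copy(media="object-storage", offsite=True, immutable=True),  # WORM copy
]
print("3-2-1-1 satisfied:", satisfies_3211(inventory))  # True
```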
03

6 key takeaways from RSA Conference 2026

security · industry
Mar 31, 2026

At RSA Conference 2026, security leaders discussed a major tension: adopting AI quickly for competitive advantage while protecting against threats that AI itself is creating. The conference confirmed that AI has become central to cybersecurity conversations, with discussions covering both AI as a defensive tool and as an offensive weapon that attackers can use at extreme speed. The threat surface for enterprise AI systems has expanded significantly beyond initial concerns, now including data leakage, shadow AI (unauthorized AI tools), prompt injection (tricking AI by hiding instructions in its input), copyright issues, hallucinations (when AI generates false information), and data residency problems, all of which can occur simultaneously when organizations adopt AI tools.

CSO Online
04

Enforcement of Chapter V under the EU AI Act

policy
Mar 31, 2026

The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific rules for development and documentation starting August 2, 2025, though the Commission won't enforce these rules until August 2, 2026. The Act gives enforcement power to the Commission, which can request information, conduct evaluations, and impose fines, while other actors like national market surveillance authorities and scientific panels can also report violations.

EU AI Act Updates
05

If OpenAI is to float on the stock market this year, it needs to start turning a profit

industry
Mar 31, 2026

OpenAI, valued at $850 billion and known for creating ChatGPT, is reportedly spending massive amounts on infrastructure (the computing power and equipment needed to run AI systems), with plans to spend $600 billion by 2030. The article argues that if OpenAI wants to go public through an IPO (initial public offering, where a private company sells shares to the public), it needs to become profitable and show it has a sustainable business model rather than just relying on investor excitement about AI.

The Guardian Technology
06

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise 

security
Mar 31, 2026

Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts). The flaw was serious because compromised tokens could give attackers unauthorized access to code repositories and projects. A format-based token scan is sketched after this item.

SecurityWeek
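
Whatever the Codex-specific details, leaked GitHub tokens are detectable by format: GitHub's credentials carry fixed prefixes (ghp_, gho_, ghu_, ghs_, ghr_ for classic tokens, github_pat_ for fine-grained ones). The sketch below scans a source tree for those patterns; the length bounds and the scan root are illustrative assumptions.

```python
# Minimal leak scan: flag strings matching GitHub token prefixes in a
# source tree. Prefixes are GitHub's documented formats; the length
# bounds and the scan root are illustrative.
import re
from pathlib import Path

TOKEN_RE = re.compile(
    r"\b(?:gh[pousr]_[A-Za-z0-9]{36,}|github_pat_[A-Za-z0-9_]{22,})\b"
)

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_RE.finditer(text):
            # Print only a prefix so the report itself doesn't leak secrets.
            print(f"{path}: possible token {match.group()[:12]}...")

scan(".")  # scan the current working tree
```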
07

MITRE ATLAS v5.5.0

security · research
Mar 30, 2026

Version 5.5.0 adds new security techniques documenting threats to AI systems, including AI agent tool poisoning (when attackers corrupt tools that AI agents use), supply chain attacks, and cost harvesting (depleting computing resources through expensive queries). It also updates existing techniques and mitigations related to code signing and monitoring AI agent behavior; an integrity-check sketch follows this item.

MITRE ATLAS Releases
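
One mitigation family this release touches, integrity checks on agent tooling, is straightforward to prototype: pin a digest for each tool definition and refuse to load anything that drifts. The sketch below is a generic illustration of that idea, not ATLAS's own mitigation text; in practice the pins would come from a signed manifest rather than an in-memory dict.

```python
# Hash-pinning sketch against agent tool poisoning: each tool file the
# agent may load is pinned to a known-good SHA-256, and a mismatch
# (e.g. after a supply-chain tamper) blocks the load.
import hashlib
from pathlib import Path

PINS: dict[str, str] = {}  # path -> trusted digest (a signed manifest in practice)

def pin_tool(path: str) -> None:
    # Record the current digest as the trusted baseline.
    PINS[path] = hashlib.sha256(Path(path).read_bytes()).hexdigest()

def load_tool(path: str) -> str:
    # Refuse to load a tool whose contents drifted from the pin.
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if PINS.get(path) != digest:
        raise RuntimeError(f"integrity check failed for {path}")
    return Path(path).read_text()
```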
08

California to impose new AI regulations in defiance of Trump call

policy
Mar 30, 2026

California's governor signed an executive order requiring AI companies that want to do business with the state to meet new safety standards, including preventing the spread of harmful content, reducing bias (harmful patterns in AI decision-making), and being transparent about their practices. This move contradicts the federal government's call for less regulation, as California joins other states in passing over 100 laws to protect children and intellectual property from AI misuse.

The Guardian Technology
09

CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe commands...

security
Mar 30, 2026

HAI Build Code Generator has a feature that automatically runs commands it decides are safe, but researchers found a flaw: attackers can use prompt injection (tricking an AI by hiding instructions in its input) to disguise malicious commands as safe ones, causing them to execute without user permission. This vulnerability allows arbitrary command execution (running any code) on a system by bypassing the safety check. A deterministic command gate is sketched after this item.

NVD/CVE Database
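
This CVE and CVE-2026-30306 below share a failure mode: the model itself classifies which commands are safe, so a prompt injection that steers the model also steers the gate. A more robust pattern keeps the decision outside the model, as in the hypothetical sketch below; the allowlist contents are illustrative, and a production gate would also constrain arguments, not just the command name.

```python
# Deterministic command gate: the model proposes a command, but whether
# it runs automatically is decided by a fixed allowlist, not by the
# model's own safety classification (which prompt injection can steer).
import shlex
import subprocess

ALLOWED = {"ls", "git", "pytest"}  # illustrative; constrain args too in practice

def run_model_command(command: str) -> None:
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        # Anything off-list requires explicit human approval.
        if input(f"Run '{command}'? [y/N] ").strip().lower() != "y":
            print("skipped")
            return
    subprocess.run(argv, check=False)
```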
10

CVE-2026-30306: In its design for automatic terminal command execution, SakaDev offers two options: Execute safe commands and execute all...

security
Mar 30, 2026

SakaDev has a feature that automatically runs terminal commands (direct computer instructions) chosen by its AI model, but it can be tricked through prompt injection (hiding malicious instructions in seemingly normal input) to misclassify dangerous commands as safe, allowing attackers to run harmful code without user approval.

NVD/CVE Database