aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,687
Last 24 hours: 29
Last 7 days: 169
Daily Briefing: Tuesday, March 31, 2026

- Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

- Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising its documentation to better explain resource and account usage.

Latest Intel

Page 73 of 269
01

CVE-2026-28416: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.6.0, a Server-Side Request Forgery (SSRF) vulnerability allowed attackers to reach internal services via `gr.load()`.

security
Feb 27, 2026

Gradio, a Python package for building AI demos, had a vulnerability (SSRF, or server-side request forgery, where an attacker tricks a server into making requests it shouldn't) before version 6.6.0 that let attackers access internal services and private networks by hosting a malicious Gradio Space that victims load with the `gr.load()` function.

Critical This Week (5 issues)

- [critical] CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_… (NVD/CVE Database, Mar 30, 2026)
Daily Briefing (continued)

- EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

- Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

Fix: Update Gradio to version 6.6.0 or later.
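Upgrading is the actual remedy here. For readers wanting the general pattern, SSRF defenses typically resolve a request target and refuse private, loopback, or otherwise internal addresses before fetching. A minimal Python sketch of that idea (illustrative only, not Gradio's patch; the function name is made up):

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_target(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or reserved addresses."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        # Resolve every address the hostname maps to, not just the first one.
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a check like this is still exposed to DNS rebinding unless the address that passed validation is also the one actually connected to.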

NVD/CVE Database
02

CVE-2026-28415: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.6.0, the _redirect_to_target() function did not validate the _target_url parameter, enabling open redirects.

security
Feb 27, 2026

Gradio, a Python package for building AI interfaces quickly, has a vulnerability in versions before 6.6.0 where the _redirect_to_target() function doesn't validate the _target_url parameter, allowing attackers to redirect users to malicious external websites through the /logout and /login/callback endpoints on apps using OAuth (a login system). This vulnerability only affects Gradio apps running on Hugging Face Spaces with gr.LoginButton enabled.

Fix: Update to Gradio version 6.6.0 or later. Starting in version 6.6.0, the _target_url parameter is sanitized to only use the path, query, and fragment, stripping any scheme or host.
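That fix, keeping only the path, query, and fragment of the target, can be sketched with the standard library. This is an illustrative stand-in, not Gradio's actual code:

```python
from urllib.parse import urlsplit, urlunsplit

def sanitize_redirect(target_url: str) -> str:
    """Strip scheme and host so a redirect can only stay on the current site."""
    parts = urlsplit(target_url)
    # Force a single leading slash; "//evil.example" would otherwise be
    # interpreted by browsers as a protocol-relative URL to another host.
    path = "/" + parts.path.lstrip("/") if parts.path else "/"
    return urlunsplit(("", "", path, parts.query, parts.fragment))
```

Collapsing leading slashes matters because a redirect target beginning with `//` is treated as protocol-relative and would still leave the site.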

NVD/CVE Database
03

CVE-2026-28414: Gradio is an open-source Python package designed for quick prototyping. Prior to version 6.7, Gradio apps running on Windows with Python 3.13 or newer allowed arbitrary file reads via path traversal.

security
Feb 27, 2026

Gradio (an open-source Python package for building web interfaces quickly) has a vulnerability in versions before 6.7 on Windows with Python 3.13 and newer that allows attackers to read any file from the server by exploiting a flaw in how the software checks if file paths are absolute (starting from the root directory). The vulnerability exists because Python 3.13 changed how it defines absolute paths, breaking Gradio's protections against path traversal (accessing files outside intended directories).
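A containment check that compares fully resolved paths, instead of asking whether a path "is absolute", sidesteps exactly this kind of platform drift. A generic sketch (illustrative; not Gradio's implementation):

```python
from pathlib import Path

def is_allowed_path(allowed_root: str, requested: str) -> bool:
    """True only if `requested` stays inside `allowed_root` after resolution.

    Resolving first and then testing containment avoids depending on how a
    given Python version or OS decides whether a path counts as absolute.
    """
    root = Path(allowed_root).resolve()
    # If `requested` is itself absolute, joining replaces `root` entirely,
    # and the containment test below rejects it.
    target = (root / requested).resolve()
    return target.is_relative_to(root)  # Path.is_relative_to: Python 3.9+
```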

Fix: Update Gradio to version 6.7 or later.

NVD/CVE Database
04

CVE-2026-27167: Gradio is an open-source Python package designed for quick prototyping. Starting in version 4.16.0 and prior to version 6.6.0, automatically enabled fake OAuth routes could expose the server owner's Hugging Face token.

security
Feb 27, 2026

Gradio, a Python package for building web interfaces, has a security flaw in versions 4.16.0 through 6.5.x where it automatically enables fake OAuth routes (authentication shortcuts) that accidentally expose the server owner's Hugging Face access token (a credential used to authenticate with Hugging Face services) to anyone who visits the login page. An attacker can steal this token because the session cookie (a small file storing login information) is signed with a hardcoded secret, making it easy to decode.

Fix: Update to Gradio version 6.6.0.

NVD/CVE Database
05

Pentagon moves to designate Anthropic as a supply-chain risk

policy
Feb 27, 2026

President Trump directed federal agencies to stop using Anthropic's AI products and gave them six months to phase out usage, following the company's dispute with the Department of Defense. The Secretary of Defense designated Anthropic as a supply-chain risk to national security, meaning military contractors can no longer do business with the company, because Anthropic refused to let its AI models be used for mass domestic surveillance or fully autonomous weapons (systems that make decisions and take action without human control).

TechCrunch
06

Trump Orders All Federal Agencies to Phase Out Use of Anthropic Technology

policy, safety
Feb 27, 2026

Anthropic, maker of the AI chatbot Claude, refused the Pentagon's demand to allow unrestricted military use of its technology, citing concerns about safeguards against mass surveillance and autonomous weapons (systems that make decisions without human control). President Trump ordered all federal agencies to stop using Anthropic's technology in response, escalating a public dispute within the AI industry about balancing national security needs with AI safety protections.

SecurityWeek
07

Trump orders federal agencies to drop Anthropic’s AI

policy
Feb 27, 2026

President Trump ordered federal agencies to stop using Claude (an AI system made by Anthropic) after the company's CEO refused to sign a military agreement that would allow unlimited use of their technology. The disagreement centers on whether Anthropic's AI should be available for all military purposes, including domestic surveillance.

The Verge (AI)
08

An AI agent coding skeptic tries AI agent coding, in excessive detail

industry
Feb 27, 2026

A software developer who was skeptical about AI coding agents discovered they have become significantly more capable, using them to build increasingly complex projects including a Rust implementation of machine learning algorithms. The developer notes that recent AI coding models (like Opus 4.6 and Codex 5.3) are dramatically better than earlier versions, but this improvement is hard to communicate publicly without sounding like promotional hype.

Simon Willison's Weblog
09

‘Silent’ Google API key change exposed Gemini AI data

security
Feb 27, 2026

Google's API keys (simple identifiers that were designed only for billing purposes) unexpectedly gained the ability to authenticate access to private Gemini AI project data without any warning to developers. Researchers found 2,863 exposed keys that could let attackers steal files, datasets, and documents, or rack up expensive bills by running the AI model repeatedly.

Fix: Site administrators should check the GCP console for keys allowing the Generative Language API and look for unrestricted keys marked with a yellow warning icon. Exposed keys should be rotated or regenerated (replaced with new ones) with a grace period to avoid breaking apps using the old keys. Google's roadmap includes making API keys created through AI Studio default to Gemini-only access and blocking leaked keys while notifying customers when they detect them.

CSO Online
10

Flaw-Finding AI Assistants Face Criticism for Speed, Accuracy

security, industry
Feb 27, 2026

AI assistants designed to find security vulnerabilities (weaknesses in software that attackers can exploit) are not yet reliable enough for professional use, despite their potential to help find bugs faster. Experts say current AI tools have problems with both accuracy and speed, making them unsuitable for businesses and developers who need dependable security scanning.

Dark Reading
Critical This Week (continued)

- [critical] CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)
- [critical] Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)
- [critical] CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- [critical] CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)