aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This platform was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 44 · Last 7 days: 183
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

CVE-2025-52882: Claude Code is an agentic coding tool. Claude Code extensions in VSCode and forks (e.g., Cursor, Windsurf, and VSCodium)

security
Jun 24, 2025

Claude Code is an AI-powered coding assistant available as extensions in popular coding editors (IDEs, or integrated development environments, the software tools developers use to write code). Versions before 1.0.24 for VSCode and before 0.1.9 for JetBrains IDEs have a security flaw that lets attackers connect to the tool without permission when a user visits a malicious website, potentially allowing them to read files, see what code the user is working on, or even run code in certain situations.


Fix: Anthropic released a patch on June 13, 2025. For VSCode and similar editors, open Extensions (View->Extensions), find Claude Code for VSCode, and update to 1.0.24 or later (or uninstall any earlier version), then restart the editor. For JetBrains IDEs (IntelliJ, PyCharm, Android Studio), open the Plugins list, find Claude Code [Beta], update to 0.1.9 or later (or uninstall any earlier version), and restart the IDE. The extension auto-updates when launched, but users should manually verify they have the patched version.

NVD/CVE Database
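The manual check above can be scripted. A minimal sketch, assuming the `code` CLI is on the PATH and that the extension id contains "claude-code" (the exact publisher id is an assumption, not from the advisory; verify it locally):

```python
# Sketch: flag a Claude Code VSCode extension older than the patched 1.0.24.
# Assumption: extension ids containing "claude-code"; confirm the real id.
import subprocess

PATCHED = (1, 0, 24)  # first safe VSCode-extension version per the advisory

def parse_version(v: str) -> tuple:
    """Turn '1.0.23' into (1, 0, 23) for tuple comparison."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_vulnerable(installed: str, patched: tuple = PATCHED) -> bool:
    return parse_version(installed) < patched

def check_vscode_extensions() -> None:
    # `code --list-extensions --show-versions` prints lines like "id@1.0.23".
    out = subprocess.run(
        ["code", "--list-extensions", "--show-versions"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        ext, _, ver = line.partition("@")
        if "claude-code" in ext.lower() and is_vulnerable(ver):
            print(f"UPDATE NEEDED: {ext} {ver} (< 1.0.24)")
```

The same tuple comparison works for the JetBrains 0.1.9 threshold by swapping the `patched` argument.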
02

CVE-2025-6206: The Aiomatic - Automatic AI Content Writer & Editor, GPT-3 & GPT-4, ChatGPT ChatBot & AI Toolkit plugin for WordPress is

security
Jun 24, 2025

The Aiomatic WordPress plugin (versions up to 2.5.0) has a security flaw where it doesn't properly check what type of files users are uploading, allowing authenticated attackers with basic user access to upload harmful files to the server. This could potentially lead to RCE (remote code execution, where an attacker can run commands on a system they don't own), though an attacker needs to provide a Stability.AI API key value to exploit it.

NVD/CVE Database
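For context, a minimal sketch of the kind of server-side file-type validation the vulnerable plugin lacked; the names here (`ALLOWED_EXTENSIONS`, `validate_upload`) are illustrative, not Aiomatic's actual code:

```python
# Sketch: allowlist-based upload validation to block executable uploads.
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".txt"}

def validate_upload(filename: str) -> bool:
    """Reject any upload whose final extension is not explicitly allowed."""
    base = os.path.basename(filename)      # strip any path components
    _, ext = os.path.splitext(base)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        return False                       # blocks .php, .phtml, etc.
    # Also block double-extension tricks like "shell.php.jpg", which can
    # slip past servers misconfigured to execute on an inner extension.
    inner = base.lower().rsplit(ext.lower(), 1)[0]
    if any(inner.endswith(bad) for bad in (".php", ".phtml", ".php5")):
        return False
    return True
```

An allowlist (name what is permitted) is safer than a denylist here, since attackers only need one forgotten executable extension to get through.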
03

CVE-2025-2828: A Server-Side Request Forgery (SSRF) vulnerability exists in the RequestsToolkit component of the langchain-community pa

security
Jun 23, 2025

A Server-Side Request Forgery (SSRF, a vulnerability where an AI system makes unwanted requests to internal or local servers on behalf of an attacker) vulnerability exists in the RequestsToolkit component of the langchain-community package version 0.0.27. The flaw allows attackers to scan ports, access local services, steal cloud credentials, and interact with local network servers because the toolkit doesn't block requests to internal addresses.

Fix: This issue has been fixed in version 0.0.28. Users should upgrade the langchain-community package to version 0.0.28 or later.

NVD/CVE Database
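A sketch of the general SSRF mitigation pattern involved: resolve the target host and refuse private, loopback, link-local, and reserved addresses before making any request. This is illustrative of the technique, not the actual langchain-community patch:

```python
# Sketch: refuse outbound requests to internal/local network addresses.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Return False if the URL resolves to an internal address."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False                       # unresolvable: fail closed
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False                   # blocks 10.x, 127.x, 169.254.x, ...
    return True
```

Note that checking the hostname string alone is insufficient; resolution matters, because DNS can point a benign-looking name at 127.0.0.1 (and re-resolution between check and request is a further hardening concern).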
04

AI Risk Report: Fast-Growing Threats in AI Runtime

security, safety
Jun 23, 2025

Runtime attacks on large language models are rapidly increasing, with jailbreak techniques (methods that bypass AI safety restrictions) and denial-of-service exploits (attacks that make systems unavailable) becoming more sophisticated and widely shared through open-source platforms like GitHub. The report explains that these attacks have evolved from isolated research experiments into organized toolkits accessible to threat actors, affecting production AI deployments across enterprises.

Protect AI Blog
05

CVE-2025-52967: gateway_proxy_handler in MLflow before 3.1.0 lacks gateway_path validation.

security
Jun 23, 2025

MLflow versions before 3.1.0 have a vulnerability in the gateway_proxy_handler component where it fails to properly validate the gateway_path parameter, potentially allowing SSRF (server-side request forgery, where an attacker tricks the server into making unwanted requests to internal systems). This validation gap could be exploited to access resources the attacker shouldn't be able to reach.

Fix: Upgrade MLflow to version 3.1.0 or later. The fix is available in the official release at https://github.com/mlflow/mlflow/releases/tag/v3.1.0.

NVD/CVE Database
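A sketch of the sort of path validation a proxy handler like this needs; the route list and helper name are illustrative assumptions, not MLflow's implementation:

```python
# Sketch: validate a proxied gateway path before forwarding it.
import posixpath

ALLOWED_ROUTES = {"chat", "completions", "embeddings"}  # assumed config

def validate_gateway_path(gateway_path: str) -> bool:
    if "://" in gateway_path or gateway_path.startswith("//"):
        return False                       # reject absolute URLs outright
    normalized = posixpath.normpath(gateway_path).lstrip("/")
    if normalized.startswith(".."):
        return False                       # reject path traversal
    # Only forward paths whose first segment is a known route.
    return normalized.split("/", 1)[0] in ALLOWED_ROUTES
```

Normalizing before checking matters: without it, inputs like `chat/../../internal` would pass a naive prefix check.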
06

CVE-2025-52552: FastGPT is an AI Agent building platform. Prior to version 4.9.12, the LastRoute Parameter on login page is vulnerable t

security
Jun 20, 2025

FastGPT, an AI Agent building platform, has a vulnerability in versions before 4.9.12 where the LastRoute parameter on the login page is not properly validated or cleaned of malicious code. This allows attackers to perform open redirect (sending users to attacker-controlled websites) or DOM-based XSS (injecting malicious JavaScript that runs in the user's browser).

Fix: Update FastGPT to version 4.9.12 or later, where this issue has been patched.

NVD/CVE Database
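The standard defense for a redirect parameter like LastRoute is to accept only same-origin relative paths. A minimal sketch of that check (illustrative, not FastGPT's actual patch):

```python
# Sketch: accept only same-origin relative paths as redirect targets.
from urllib.parse import urlparse

def is_safe_redirect(target: str) -> bool:
    if "\\" in target:
        return False                   # some browsers treat \ as /
    if not target.startswith("/") or target.startswith("//"):
        return False                   # rejects absolute URLs and //host tricks
    parsed = urlparse(target)
    # A true relative path carries no scheme and no network location.
    return parsed.scheme == "" and parsed.netloc == ""
```

Rejecting `//host` separately is important: `urlparse` treats `//evil.example` as a network location, and browsers will follow it as a protocol-relative URL.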
07

The Cost of Being Wordy: Detecting Resource-Draining Prompts

security, research
Jun 17, 2025

Attackers can exploit large language models (LLMs) through "sponge attacks," which are denial of service (DoS) attacks that craft prompts designed to generate extremely long outputs, exhausting the model's resources and degrading performance. Researchers are developing methods to predict how long an LLM's response will be based on a given prompt, creating an early warning system to detect and prevent these resource-draining attacks.

Protect AI Blog
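Independent of prediction, the blunt runtime guard against sponge-style drains is a hard output budget. A minimal sketch over a hypothetical token stream (the generator interface here is an assumption, not from the article):

```python
# Sketch: cap streamed LLM output so a sponge prompt cannot drain resources.
def bounded_generate(stream, max_tokens: int = 512):
    """Consume a token stream, stopping once the budget is spent."""
    produced = []
    for token in stream:
        produced.append(token)
        if len(produced) >= max_tokens:
            break                      # cut the stream; log for review
    return produced
```

A length predictor of the kind the researchers describe would sit in front of this, rejecting or down-prioritizing prompts expected to blow the budget before any tokens are generated.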
08

AI Safety Newsletter #57: The RAISE Act

policy
Jun 17, 2025

New York's legislature passed the RAISE Act (Responsible AI Safety and Education Act), which would regulate frontier AI systems (the largest, most powerful AI models) if signed into law. The act requires developers of expensive AI models to publish safety plans, withhold unreasonably risky models from release, report safety incidents within 72 hours, and face penalties up to $10 million for violations.

CAIS AI Safety Newsletter
09

Why Join the EU AI Scientific Panel?

policy
Jun 16, 2025

The European Commission is recruiting up to 60 independent experts for a scientific panel to advise on general-purpose AI (GPAI, large AI models designed for many tasks) under the EU AI Act. The panel will assess systemic risks (widespread dangers affecting multiple countries or many users), classify AI models, and issue alerts when AI systems pose significant dangers to Europe. Applicants need a PhD in a relevant field, proven AI research experience, and independence from AI companies, with the deadline set for September 14th.

EU AI Act Updates
10

Security Spotlight: AppSec to AI, a Security Engineer's Journey

security, research
Jun 12, 2025

This article compares traditional application security (AppSec) practices with AI security, noting that familiar principles like input validation and authentication apply to both, but AI systems introduce unique risks. New attack types specific to AI, such as prompt injection (tricking an AI by hiding instructions in its input), model poisoning (tampering with training data), and membership inference attacks (determining if specific data was in training), require security engineers to develop new defensive strategies beyond traditional code-level vulnerability management.

Protect AI Blog
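One concrete AI-specific habit the article's comparison points toward: keep trusted instructions and untrusted input in separate message roles rather than concatenating them into one prompt string. A sketch using the common chat-completion message shape (illustrative, and a mitigation rather than a complete defense against prompt injection):

```python
# Sketch: separate trusted policy from untrusted input by message role.
def build_messages(system_policy: str, untrusted_input: str) -> list:
    return [
        {"role": "system", "content": system_policy},
        # Untrusted text goes only in the user role: it is data, not policy.
        {"role": "user", "content": untrusted_input},
    ]
```

This mirrors the AppSec principle of parameterized queries: the structural separation does not make injection impossible, but it denies untrusted text the privileged position it needs to masquerade as instructions.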