aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,700
Last 24 hours: 23
Last 7 days: 160
Daily Briefing: Tuesday, March 31, 2026

FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.
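The SSRF half of this pair comes down to the server fetching attacker-chosen URLs on request. A minimal guard, sketched here with only the Python standard library, resolves the target and refuses private, loopback, and link-local destinations (the cloud metadata address 169.254.169.254 falls in the link-local range). This is an illustration of the defense class, not FastGPT's actual fix; `is_safe_target` is a hypothetical helper name.

```python
import ipaddress
import socket

def is_safe_target(hostname: str) -> bool:
    """Resolve a hostname and reject private, loopback, link-local,
    and reserved addresses (e.g. the 169.254.169.254 cloud metadata
    service) before proxying any request to it."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # unresolvable names are denied by default
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True

print(is_safe_target("169.254.169.254"))  # False: metadata endpoint
print(is_safe_target("10.1.2.3"))         # False: private range
```

Note that checking once at request time still leaves a DNS-rebinding window; production guards pin the resolved address for the actual connection.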


Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.
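The MLflow bug follows the classic shell-interpolation pattern. The sketch below is generic (the `serve-model` command name is invented for illustration, not MLflow's real invocation): building a command by string formatting lets metacharacters in the path inject extra commands, while passing argv as a list, or quoting with `shlex.quote`, keeps the path inert.

```python
import shlex

def build_serve_command_unsafe(model_path: str) -> str:
    # Vulnerable pattern: interpolating an untrusted path into a shell
    # string lets "models/demo; rm -rf /" smuggle in a second command.
    return f"serve-model {model_path}"

def build_serve_command_safe(model_path: str) -> list[str]:
    # Passing argv as a list (subprocess with shell=False) keeps the
    # whole path as one argument; metacharacters are never interpreted.
    return ["serve-model", model_path]

malicious = "models/demo; echo INJECTED"
print(build_serve_command_unsafe(malicious))  # a shell would run both commands
print(build_serve_command_safe(malicious))    # one inert argv element
print(shlex.quote(malicious))                 # quoted form if a shell is unavoidable
```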

Latest Intel

01

CVE-2025-2148: A critical vulnerability was found in PyTorch 2.6.0+cu124, affecting the function torch.ops.profiler._call_end_callbacks_on_jit_fut.

security
Mar 10, 2025

A critical vulnerability (CVE-2025-2148) was found in PyTorch 2.6.0+cu124 in a function called torch.ops.profiler._call_end_callbacks_on_jit_fut that handles tuples (groups of related data). When the function receives a None argument (a placeholder for "no value"), it causes memory corruption (where data stored in memory gets damaged or overwritten), and the attack can be launched remotely. However, the exploit is difficult to carry out and requires user interaction.

Critical This Week: 5 issues

critical
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/
CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).


Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

NVD/CVE Database
02

CVE-2025-1945: picklescan before 0.0.23 fails to detect malicious pickle files inside PyTorch model archives when certain ZIP file flag bits are modified.

security
Mar 10, 2025

picklescan before version 0.0.23 can be tricked into missing malicious pickle files (serialized Python objects) hidden inside PyTorch model archives by modifying certain bits in ZIP file headers. An attacker can use this technique to embed code that runs automatically when someone loads the model with PyTorch, potentially taking over the user's system.
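The header-tampering idea can be reproduced with nothing but the standard library. The sketch below is an illustration of the general technique, not the exact bits from the advisory: it builds a tiny archive, flips a general-purpose flag bit in the local file header while leaving the central directory untouched, and shows that Python's zipfile, which trusts the central directory, still hands back the payload — so a tool keying off the tampered local header can reach a different conclusion than the loader.

```python
import io
import zipfile

# Build a small archive the way a model file might be packaged.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", b"payload")

raw = bytearray(buf.getvalue())

# The first local file header starts at offset 0; its 2-byte
# general-purpose flag field sits at offset 6. Flip bit 3 (the
# "data descriptor" flag) in the local header only -- the central
# directory copy of the flags at the end of the file is untouched.
raw[6] |= 0x08

# A reader that trusts the central directory still returns the
# member, while a parser keyed to the local header may disagree.
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    recovered = zf.read("data.pkl")
print(recovered)  # b'payload'
```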

Fix: Upgrade picklescan to version 0.0.23 or later. The fix is available in commit e58e45e0d9e091159c1554f9b04828bbb40b9781 at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781

NVD/CVE Database
03

CVE-2025-1944: picklescan before 0.0.23 is vulnerable to a ZIP archive manipulation attack that causes it to crash when attempting to scan PyTorch model archives.

security
Mar 10, 2025

picklescan before version 0.0.23 has a vulnerability where an attacker can manipulate a ZIP archive (a compressed file format) by changing filenames in the ZIP header while keeping the original filename in the directory listing. This causes picklescan to crash with a BadZipFile error when trying to scan PyTorch model files (machine learning models), but PyTorch's more forgiving ZIP handler still loads the model anyway, allowing malicious code to bypass the security scanner.
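The local-header/central-directory name mismatch is easy to demonstrate with the standard library. In this sketch the member name in the local file header is overwritten while the central directory keeps the original name; a strict parser like Python's zipfile raises BadZipFile, exactly as the scanner does, while a more forgiving reader (such as the one PyTorch uses) could proceed and load the file anyway.

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", b"payload")

raw = bytearray(buf.getvalue())

# The first local file header's name field starts at offset 30.
# Overwrite it with a different same-length name, leaving the
# central directory entry at the end of the archive untouched.
assert raw[30:38] == b"data.pkl"
raw[30:38] = b"lies.pkl"

tampered = zipfile.ZipFile(io.BytesIO(raw))
names = tampered.namelist()
print(names)  # central directory still says ['data.pkl']

error = None
try:
    tampered.read("data.pkl")  # strict read compares both names
except zipfile.BadZipFile as exc:
    error = exc
print("strict parser rejects:", error)
```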

Fix: Upgrade picklescan to version 0.0.23 or later. The patch is available at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781.

NVD/CVE Database
04

CVE-2024-13882: The Aiomatic - Automatic AI Content Writer & Editor, GPT-3 & GPT-4, ChatGPT ChatBot & AI Toolkit plugin for WordPress is vulnerable to arbitrary file uploads in versions up to and including 2.3.8.

security
Mar 8, 2025

The Aiomatic WordPress plugin (used to generate AI-written content and images) has a vulnerability in versions up to 2.3.8 that allows authenticated users with Contributor access or higher to upload any type of file to the server due to missing file type validation (checking what kind of file is being uploaded). This could potentially allow attackers to run malicious code on the affected website.
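The missing control here is an extension allowlist on uploads. The plugin itself is PHP, but the check is language-agnostic; this Python sketch (with a hypothetical `is_allowed_upload` helper) shows the shape of it, including why the check must look at the final suffix so a double extension like `img.png.php` is rejected.

```python
from pathlib import Path

# Only the image types this upload endpoint is meant to accept.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def is_allowed_upload(filename: str) -> bool:
    """Allowlist check on the final extension, case-insensitively,
    so 'shell.php' and 'img.png.php' are both rejected."""
    suffix = Path(filename).suffix.lower()
    return suffix in ALLOWED_EXTENSIONS

print(is_allowed_upload("photo.PNG"))    # True
print(is_allowed_upload("shell.php"))    # False
print(is_allowed_upload("img.png.php"))  # False
```

Real-world validation should also verify file content (magic bytes) and store uploads outside any script-executable directory; the extension check alone is the minimum the plugin was missing.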

NVD/CVE Database
05

CVE-2024-13816: The Aiomatic - Automatic AI Content Writer & Editor, GPT-3 & GPT-4, ChatGPT ChatBot & AI Toolkit plugin for WordPress is vulnerable to missing authorization in versions up to and including 2.3.6.

security
Mar 8, 2025

The Aiomatic WordPress plugin (used for AI-powered content writing) has a security flaw in versions up to 2.3.6 where it fails to check user permissions properly, allowing attackers with basic user accounts (Subscriber level and above) to perform dangerous actions like deleting posts, removing files, and clearing logs that they shouldn't be able to access. This vulnerability puts user data at risk of unauthorized modification or deletion.
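The missing piece is a capability check before each destructive action. WordPress does this with `current_user_can()`; as a language-neutral sketch, the hypothetical gate below maps roles to ranks and denies any action whose required role outranks the caller (unknown actions are denied by default).

```python
# Hypothetical capability gate of the kind the plugin was missing.
ROLE_RANK = {"subscriber": 0, "contributor": 1, "author": 2,
             "editor": 3, "administrator": 4}

REQUIRED_ROLE = {
    "delete_post": "editor",
    "delete_file": "administrator",
    "clear_logs": "administrator",
}

def authorize(role: str, action: str) -> bool:
    """Allow a destructive action only if the caller's role ranks at
    least as high as the role the action requires; deny unknowns."""
    needed = REQUIRED_ROLE.get(action)
    if needed is None:
        return False
    return ROLE_RANK.get(role, -1) >= ROLE_RANK[needed]

print(authorize("subscriber", "delete_post"))    # False
print(authorize("administrator", "clear_logs"))  # True
```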

Fix: The vulnerability was partially patched in version 2.3.5. Updating to version 2.3.7 or later is recommended for a complete fix, though the source explicitly confirms only the partial patch in 2.3.5.

NVD/CVE Database
06

AI Safety Newsletter #49: Superintelligence Strategy

policysafety
Mar 6, 2025

A new policy paper called 'Superintelligence Strategy' proposes that advanced AI systems surpassing human capabilities in most areas pose national security risks requiring a three-part approach: deterrence (using threat of retaliation to prevent AI dominance races), nonproliferation (restricting advanced AI access to non-state actors like terrorist groups), and competitiveness (building AI strength domestically). The deterrence strategy, called Mutual Assured AI Malfunction (MAIM), mirrors nuclear strategy by threatening cyberattacks on destabilizing AI projects to prevent any single country from gaining dangerous AI superiority.

Fix: The paper explicitly proposes three nonproliferation measures: Compute Security (governments track and monitor high-end AI chips to prevent smuggling), Information Security (AI model weights, which are the trained parameters that define how an AI behaves, are protected like classified intelligence), and AI Security (developers implement technical safety measures to detect and prevent misuse, similar to how DNA synthesis services block orders for dangerous bioweapon sequences).

CAIS AI Safety Newsletter
07

CVE-2025-1953: A vulnerability has been found in vLLM AIBrix 0.2.0 and classified as problematic. Affected is the Prefix Caching component, which generates insufficiently random values.

security
Mar 4, 2025

A vulnerability (CVE-2025-1953) was found in vLLM AIBrix 0.2.0 in the Prefix Caching component (a feature that speeds up AI model processing by reusing cached data) that produces insufficiently random values, potentially compromising security. The vulnerability is rated as low severity and difficult to exploit, but it affects the cryptographic security of the system.
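"Insufficiently random" usually means values came from a general-purpose PRNG rather than a cryptographic one. As a generic illustration (not AIBrix's actual code), Python's `random` module is a Mersenne Twister whose internal state can be reconstructed from observed outputs, so identifiers derived from it are guessable; the `secrets` module draws from the OS CSPRNG and is the right tool when an identifier must be unpredictable.

```python
import random
import secrets

# Predictable: Mersenne Twister state can be recovered from enough
# observed outputs, making future "random" identifiers guessable.
weak_id = "%032x" % random.getrandbits(128)

# Cryptographically strong: secrets uses the OS CSPRNG, suitable for
# cache keys, tokens, or anything an attacker must not predict.
strong_id = secrets.token_hex(16)

print(len(weak_id), len(strong_id))  # 32 32
```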

Fix: Upgrade to vLLM AIBrix version 0.3.0, which addresses this issue.

NVD/CVE Database
08

CVE-2025-23668: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability in the ChatGPT Open AI Images & Content for WooCommerce plugin.

security
Mar 3, 2025

A cross-site scripting (XSS, where an attacker injects malicious code into a webpage to trick users) vulnerability was found in the ChatGPT Open AI Images & Content for WooCommerce plugin, affecting versions up to 2.2.0. The vulnerability allows attackers to inject harmful scripts through reflected XSS (where malicious input is immediately reflected back to the user without proper filtering).
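The standard defense against reflected XSS is escaping user input before it is written into the page. The plugin is PHP (where `htmlspecialchars` plays this role), but the idea is the same in any language; a minimal Python sketch using the stdlib `html` module:

```python
import html

user_input = '<script>alert(1)</script>'

# Reflecting input verbatim into the page lets the script execute.
unsafe = f"<p>You searched for: {user_input}</p>"

# Escaping <, >, &, and quotes renders the payload as inert text.
safe = f"<p>You searched for: {html.escape(user_input)}</p>"

print(safe)
# <p>You searched for: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Escaping must happen at output time, in the context where the value lands (HTML body, attribute, URL, or JavaScript each need different encoding).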

NVD/CVE Database
09

CVE-2025-25185: GPT Academic provides interactive interfaces for large language models. In 3.91 and earlier, GPT Academic does not properly handle soft links in uploaded archives.

security
Mar 3, 2025

CVE-2025-25185 is a vulnerability in GPT Academic (version 3.91 and earlier) where the software does not properly handle soft links (special files that point to other files). An attacker can create a malicious soft link, upload it in a compressed tar.gz file, and when the server decompresses it, the soft link will point to sensitive files on the victim's server, allowing the attacker to read all server files.
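The attack primitive is a symlink smuggled inside an archive that the server extracts blindly. A stdlib sketch of the detection side (not GPT Academic's actual patch): build an archive containing a symlink pointing outside the extraction directory, then flag symlink and hardlink members before extracting.

```python
import io
import tarfile

# Craft an archive containing a symlink that points at a file
# outside any plausible extraction directory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    link = tarfile.TarInfo("notes.txt")
    link.type = tarfile.SYMTYPE
    link.linkname = "/etc/passwd"
    tar.addfile(link)

# Inspect members before extraction and flag link entries.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    suspicious = [m.name for m in tar.getmembers()
                  if m.issym() or m.islnk()]
print(suspicious)  # ['notes.txt']
```

On Python 3.12+, `tar.extractall(filter="data")` rejects such members outright, which is the simpler fix where available.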

Fix: A patch is available at https://github.com/binary-husky/gpt_academic/commit/5dffe8627f681d7006cebcba27def038bb691949

NVD/CVE Database
10

Small Businesses’ Guide to the AI Act

policy
Feb 18, 2025

The EU AI Act includes specific support measures for small and medium-sized enterprises (SMEs, defined as companies with fewer than 250 employees and under €50 million in annual revenue). These measures include regulatory sandboxes (controlled testing environments for AI products outside normal regulatory rules), reduced compliance fees scaled to company size, simplified documentation forms, free training, and dedicated support channels to help SMEs follow the AI Act's requirements.

Fix: The source explicitly mentions several mitigation measures for SME compliance: (1) Regulatory sandboxes with free access and simple procedures for SMEs to test AI systems in controlled conditions, (2) Assessment fees proportional to SME size with regular review to lower costs, (3) Simplified technical documentation forms developed by the Commission and accepted by national authorities, (4) Training activities tailored to SMEs, (5) Dedicated guidance channels to answer compliance questions, and (6) Proportionate obligations for AI model providers with separate Key Performance Indicators for SMEs under the Code of Practice.

EU AI Act Updates
critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026