aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,700 · Last 24 hours: 29 · Last 7 days: 174
Daily Briefing: Tuesday, March 31, 2026

> FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) in which an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

> Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) in which attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories); OpenAI has since patched it.

> Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

> Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating its documentation to better explain how Vertex AI uses resources and accounts.

Latest Intel

01

CVE-2026-27966: Langflow CSV Agent node enables remote code execution via prompt injection

security
Feb 25, 2026

Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.8.0 where the CSV Agent node automatically enabled a dangerous Python execution feature. This allowed attackers to run arbitrary Python and operating system commands on the server through prompt injection (tricking the AI by hiding instructions in its input), resulting in RCE (remote code execution, where an attacker runs commands on a system they don't own).

Fix: Upgrade to version 1.8.0.

NVD/CVE Database
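The pattern behind this class of bug is a model-controlled string reaching a code interpreter. A toy sketch of the mechanism (invented names, not Langflow's actual code), assuming only that an execution flag gates an `exec` call:

```python
def run_agent_step(model_output: str, allow_code: bool, namespace=None):
    """Toy agent step, not Langflow's API: if code execution is enabled,
    whatever string the model emitted runs as Python."""
    ns = namespace if namespace is not None else {}
    if allow_code:              # the dangerous auto-enabled default
        exec(model_output, ns)  # attacker-chosen code executes here
    return ns

# Prompt injection: hidden instructions in the input make the model emit
# code instead of an answer. The "payload" here is deliberately harmless.
injected_output = "marker = 'attacker code ran'"
result = run_agent_step(injected_output, allow_code=True)
print(result.get("marker"))  # the injected string executed on the server
```

With the flag off, the same string is just data; auto-enabling execution is what turns prompt injection into remote code execution.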
02

Gushwork bets on AI search for customer leads — and early results are emerging

industry
Feb 25, 2026

Gushwork, an India-founded startup, is helping businesses get discovered through AI-powered search tools (systems like ChatGPT and Perplexity that use artificial intelligence to answer questions) by automatically creating search-optimized content and building backlinks (links from other websites that point to a business's site). The company raised $9 million in funding and reports that AI-driven search and chat platforms now account for about 40% of inbound leads for its customers, despite representing only 20% of website traffic.

TechCrunch
03

Chinese Police Use ChatGPT to Smear Japan PM Takaichi

security
Feb 25, 2026

A Chinese internet activist accidentally exposed details about coordinated political influence operations (organized campaigns to manipulate public opinion) that used ChatGPT to create negative content about Japan's Prime Minister Takaichi. The leak revealed how ChatGPT was being used as a tool to generate misleading material for political purposes.

Dark Reading
04

Anthropic acquires computer-use AI startup Vercept after Meta poached one of its founders

industry
Feb 25, 2026

Anthropic acquired Vercept, an AI startup that built tools for agentic tasks (AI systems that can independently perform complex actions), including a product called Vy that could control remote computers. Vercept's product will shut down on March 25, with some co-founders joining Anthropic while others, including investor Oren Etzioni, expressed disappointment about the acquisition ending the startup after just over a year.

TechCrunch
05

Former Alphabet 'moonshot' robotics company Intrinsic is folding into Google

industry
Feb 25, 2026

Alphabet is folding its robotics software company Intrinsic into Google to streamline its business. Intrinsic developed Flowstate, a web-based platform that lets users build robotic applications without writing thousands of lines of code, addressing the challenge that programming robots remains extremely complex despite hardware becoming cheaper. By joining Google, Intrinsic will use Google's AI models and infrastructure to expand its industrial robotics platform for manufacturing and logistics.

CNBC Technology
06

GHSA-mhr3-j7m5-c7c9: LangGraph: BaseCache Deserialization of Untrusted Data may lead to Remote Code Execution

security
Feb 25, 2026

LangGraph's checkpointing package (langgraph-checkpoint) has a remote code execution vulnerability in its caching layer, affecting versions before 4.0.0 when applications enable cache backends and opt nodes into caching. The default serializer falls back to pickle deserialization (a Python feature that can execute arbitrary code) when other serialization methods fail, so attackers who can write to the cache can execute malicious code when cached entries are loaded.

Fix: Upgrade to langgraph-checkpoint>=4.0.0, which disables pickle fallback by default (pickle_fallback=False).

GitHub Advisory Database
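Why a pickle fallback is dangerous can be shown in a few lines (an illustrative sketch; the class name and file path are invented, not from the advisory): any object whose `__reduce__` returns a callable makes `pickle.loads` invoke it.

```python
import os
import pickle
import tempfile

marker = os.path.join(tempfile.mkdtemp(), "pwned.txt")

class Payload:
    # pickle records the (callable, args) pair returned here; at load time
    # it calls the callable -- deserialization becomes code execution.
    def __reduce__(self):
        # Harmless stand-in that just creates a file. A real attacker
        # could return (os.system, ("<shell command>",)) instead.
        return (open, (marker, "w"))

blob = pickle.dumps(Payload())   # what a poisoned cache entry could hold
assert not os.path.exists(marker)
pickle.loads(blob)               # "just reading the cache" runs the payload
assert os.path.exists(marker)
```

This is why the patched default (`pickle_fallback=False`) refuses to fall back to pickle for data it cannot otherwise deserialize.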
07

GHSA-76rv-2r9v-c5m6: zae-limiter: DynamoDB hot partition throttling enables per-entity Denial of Service

security
Feb 25, 2026

The zae-limiter library has a security flaw where all rate limit buckets for a single user share the same DynamoDB partition key (the identifier that determines which storage location holds the data), allowing a high-traffic user to exceed DynamoDB's write limits and cause service slowdowns for that user and potentially others sharing the same partition. This vulnerability affects multi-tenant systems, like shared LLM proxies (AI services shared across multiple customers), where one customer's heavy traffic can degrade service for others.

Fix: The source describes a remediation design called "Pre-Shard Buckets":

- Move buckets to a sharded partition key format: `PK={ns}/BUCKET#{entity}#{resource}#{shard}`, `SK=#STATE`.
- Auto-inject a `wcu:1000` reserved limit on every bucket to track DynamoDB write pressure.
- Double the shard count (1→2→4→8) when capacity is exhausted.
- Store original limits on the bucket, deriving effective limits by dividing by shard count.
- Select shards randomly or round-robin, with retry logic (maximum 2 retries).
- Create shards lazily on first access and discover them via GSI3 (a secondary index).
- Migrate with a clean break and a schema version bump, so old buckets are ignored and new buckets are created on first access.

GitHub Advisory Database
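The sharding idea can be sketched in a few lines (function names and the random selection shown are illustrative, not the library's API; only the key format comes from the advisory):

```python
import random

def bucket_key(ns: str, entity: str, resource: str, shard: int) -> dict:
    # Advisory's key format: spreading one entity's buckets across N shard
    # values spreads its writes across N DynamoDB partitions.
    return {"PK": f"{ns}/BUCKET#{entity}#{resource}#{shard}", "SK": "#STATE"}

def effective_limit(original_limit: int, shard_count: int) -> int:
    # The original limit is stored once; each shard enforces its share.
    return original_limit // shard_count

def pick_shard(shard_count: int) -> int:
    # Random selection; the advisory also allows round-robin with retries.
    return random.randrange(shard_count)

# A hot tenant sharded 4 ways: four distinct partition keys, each carrying
# a quarter of the original 1000-unit limit.
keys = {bucket_key("prod", "tenant-42", "chat", s)["PK"] for s in range(4)}
limit_per_shard = effective_limit(1000, 4)
```

Doubling the shard count (1→2→4→8) halves the write pressure per partition each time capacity is exhausted, which is the core of the hot-partition fix.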
08

GHSA-vpcf-gvg4-6qwr: n8n: Expression Sandbox Escape Leads to RCE

security
Feb 25, 2026

n8n, a workflow automation tool, has a vulnerability where authenticated users with permission to create or modify workflows can abuse expression evaluation (the interpretation of code embedded in workflow parameters) to escape the expression sandbox and execute arbitrary system commands on the host server.

Fix: Upgrade to n8n version 2.10.1, 2.9.3, or 1.123.22 or later. If immediate upgrade is not possible, limit workflow creation and editing permissions to fully trusted users only, and deploy n8n in a hardened environment with restricted operating system privileges and network access. However, these temporary mitigations do not fully remediate the risk.

GitHub Advisory Database
09

Flaws in Claude Code Put Developers' Machines at Risk

security
Feb 25, 2026

Flaws have been discovered in Claude Code (Anthropic's AI coding assistant) that can put developers' machines at risk when it is used in software development workflows. These vulnerabilities could also affect software supply chains, the networks of companies and systems that work together to deliver software and products.

Dark Reading
10

GHSA-x2mw-7j39-93xq: n8n has Arbitrary Command Execution via File Write and Git Operations

security
Feb 25, 2026

n8n (a workflow automation tool) has a vulnerability where an authenticated user with workflow editing permissions could combine the Read/Write Files from Disk node (a component that modifies files on the server) with git operations (version control commands) to execute arbitrary shell commands (any commands an attacker chooses) on the n8n server. This requires the attacker to already have valid user access.

Fix: The issue has been fixed in n8n versions 2.2.0 and 1.123.8. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can temporarily: (1) Limit workflow creation and editing permissions to fully trusted users only, or (2) Disable the Read/Write Files from Disk node by adding `n8n-nodes-base.readWriteFile` to the `NODES_EXCLUDE` environment variable. The source notes these workarounds do not fully remediate the risk and should only be short-term measures.

GitHub Advisory Database
Critical This Week (5 issues)

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the HTTP tools testing endpoint lacks authentication and acts as an open proxy. NVD/CVE Database, Mar 31, 2026

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code. NVD/CVE Database, Mar 30, 2026

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… NVD/CVE Database, Mar 27, 2026

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm. CSO Online, Mar 27, 2026

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability. CISA Known Exploited Vulnerabilities, Mar 26, 2026