aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,700
Last 24 hours: 25
Last 7 days: 172
Daily Briefing: Tuesday, March 31, 2026

- FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

- Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories); OpenAI has since patched it.

- Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

- Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Critical This Week (5 issues)

- critical: CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/… (NVD/CVE Database, Mar 31, 2026)

Latest Intel

01

Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration

security
Feb 25, 2026

Researchers discovered three security vulnerabilities in Anthropic's Claude Code (an AI-powered coding assistant) that could allow attackers to run arbitrary commands on a developer's computer and steal API keys (authentication credentials) simply by tricking users into opening malicious project folders. The vulnerabilities exploited configuration files and automation systems to bypass safety prompts and execute malicious code without user consent.

Fix: All three vulnerabilities have been fixed in specific Claude Code versions: the first vulnerability was fixed in version 1.0.87 (September 2025), CVE-2025-59536 was fixed in version 1.0.111 (October 2025), and CVE-2026-21852 was fixed in version 2.0.65 (January 2026). Users should update to these versions or later.

The Hacker News
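Both FastGPT issues in the briefing reduce to the server fetching attacker-influenced URLs. A minimal egress-validation sketch in Python (illustrative, not FastGPT's code; the function name is hypothetical) that rejects private, loopback, link-local, and cloud-metadata targets:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    """Return False for URLs that resolve to private, loopback,
    link-local, or reserved addresses, including the 169.254.169.254
    cloud metadata endpoint."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to: an attacker can
        # point a DNS name at an internal IP.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

A check like this is still subject to DNS rebinding; production code should pin the resolved address for the actual request rather than re-resolving at fetch time.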
02

OpenClaw creator’s advice to AI builders is to be more playful and allow yourself time to improve

industry
Feb 25, 2026

Peter Steinberger, creator of OpenClaw (an AI agent that works through WhatsApp), shares advice for developers building with AI: focus on exploration and experimentation rather than having a perfect plan from the start. He emphasizes that working with AI is a learnable skill, like learning guitar, and recommends approaching it playfully and iteratively rather than expecting immediate expertise.

TechCrunch
03

The Blast Radius Problem: Stolen Credentials Are Weaponizing Agentic AI

security
Feb 25, 2026

According to IBM X-Force data from 2025, 56% of the 400,000 tracked vulnerabilities (roughly 224,000) could be exploited without authentication (the process of verifying who you are), meaning attackers need no login or legitimate access to exploit these flaws.

SecurityWeek
04

About 12% of U.S. teens turn to AI for emotional support or advice

safety, policy
Feb 25, 2026

About 12% of U.S. teenagers use AI chatbots for emotional support or advice, alongside more common uses like searching for information and getting homework help. Mental health professionals warn that general-purpose AI tools like ChatGPT are not designed for this purpose and can isolate users from real-world connections and relationships, potentially causing serious psychological harm.

Fix: Character.AI disabled chatbot access for users under 18 following lawsuits related to teen suicides. OpenAI sunset (discontinued) its GPT-4o model, which users had relied on for emotional support.

TechCrunch
05

GHSA-mhc9-48gj-9gp3: Fickling has safety check bypass via REDUCE+BUILD opcode sequence

security
Feb 25, 2026

Fickling (a Python library for analyzing pickle files, a Python serialization format) has a safety bypass where dangerous operations like network connections and file access are falsely marked as safe when certain opcodes (REDUCE and BUILD, which are pickle instructions) appear in sequence. Attackers can add a simple BUILD opcode to any malicious pickle to evade all five of fickling's safety detection methods.

Fix: Potentially unsafe modules have been added to a blocklist in https://github.com/trailofbits/fickling/commit/0c4558d950daf70e134090573450ddcedaf10400.

GitHub Advisory Database
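The broader lesson from the fickling bypass is that pickle's load path can call arbitrary importable callables. Independent of fickling, a common defense-in-depth sketch (the allow-list below is an illustrative assumption, not fickling's logic) overrides `Unpickler.find_class`:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Only resolve globals on an explicit allow-list; everything else,
    including os.system-style callables, is rejected at load time."""
    ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize pickle bytes with the restricted global resolver."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Opcode-level scanners like fickling complement this, but an allow-list fails closed when a new bypass technique appears.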
06

Does Anthropic think Claude is alive? Define ‘alive’

safety
Feb 25, 2026

Anthropic executives have suggested in recent interviews that Claude (their AI model) might be alive or conscious in some way, though the company denies Claude is alive in the way biological organisms are. The company avoids directly stating whether Claude is conscious, treating "alive" as a loaded term while focusing on model welfare research.

The Verge (AI)
07

Jira’s latest update allows AI agents and humans to work side by side

industry
Feb 25, 2026

Atlassian has released a new feature called 'agents in Jira' that lets teams assign work to AI agents (programs that can perform tasks automatically) from the same project management dashboard used for human workers. The update tracks agent progress, sets deadlines, and allows companies to compare how AI agents perform against human employees on the same projects, potentially helping enterprises decide where AI automation is most valuable.

TechCrunch
08

Poisoning AI Training Data

security, safety
Feb 25, 2026

A researcher demonstrated how easily AI systems can be manipulated: after he published false information on a personal website, major chatbots including Google's Gemini and ChatGPT repeated it as fact within 24 hours. This shows that AI training data poisoning (deliberately seeding fake information into the data used to teach AI models) is a serious problem because it is so simple to execute.

Schneier on Security
09

Claude’s New AI Vulnerability Scanner Sends Cybersecurity Shares Plunging

industry
Feb 25, 2026

Stock prices for major cybersecurity companies have dropped significantly because of concerns that AI tools, specifically Claude's new vulnerability scanner (a tool that automatically finds security flaws in software), are disrupting the cybersecurity business.

SecurityWeek
10

CVE-2026-27597: Enclave is a secure JavaScript sandbox designed for safe AI agent code execution. Prior to version 2.11.1, it is possibl…

security
Feb 24, 2026

Enclave is a secure JavaScript sandbox designed to safely run code from AI agents, but versions before 2.11.1 had a vulnerability that allowed attackers to escape the security boundaries and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own). This weakness is related to code injection (CWE-94, a type of bug where untrusted input is used to generate code that gets executed).

Fix: The issue is fixed in version 2.11.1; update to that version or later.

NVD/CVE Database
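Enclave's escape is specific to its sandbox internals, but the underlying class, CWE-94, usually enters Python codebases through `eval` on untrusted strings. A hedged sketch of the safe alternative (the function name is hypothetical):

```python
import ast

def parse_untrusted_literal(text: str):
    """Parse numbers, strings, tuples, lists, dicts, sets, booleans,
    and None from untrusted input without executing code; unlike
    eval(), ast.literal_eval() rejects calls and attribute access."""
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"not a plain literal: {text!r}") from exc
```

This only covers the "evaluate a config/data string" case; genuinely running untrusted code requires process-level isolation, which is the harder problem Enclave is attempting.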
Critical This Week (continued)

- critical: CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_… (NVD/CVE Database, Mar 30, 2026)

- critical: CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)

- critical: Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

- critical: CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
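The MLflow entry above (like CVE-2026-0596 in the daily briefing) is the classic shell-injection pattern: an untrusted model path interpolated into a shell command line. A minimal illustration of the fix, with `echo` standing in for the real serving command (a generic sketch, not MLflow's patch):

```python
import subprocess

def run_tool(executable: str, untrusted_arg: str) -> str:
    """Pass untrusted input as a discrete argv element with shell=False
    (the default), so metacharacters like ';' and '$( )' reach the
    program as literal text instead of being shell-interpreted."""
    result = subprocess.run(
        [executable, untrusted_arg],  # argv list: nothing is shell-parsed
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Under shell=True this payload would chain a second command; here it
# arrives at the program as one ordinary string argument.
payload = "model.pkl; rm -rf /tmp/x"
```

The vulnerable counterpart is `subprocess.run(f"serve {untrusted_arg}", shell=True)`, where the same payload would execute both commands.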