aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by Truong (Jack) Luu, Information Systems Researcher.

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an information systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing
Friday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three flaws, two critical and one high severity, affecting versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (run any commands they want) on the server by submitting malicious configurations or prompt templates that were evaluated without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions such as exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller (a sketch of the missing check follows this briefing), and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk, compared to just 13% for traditional software, and that only 38% of high-risk AI issues get resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
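
The ClaudeBleed item above comes down to a missing caller check. Below is a minimal sketch of what such a check could look like, assuming a Manifest V3 extension with the @types/chrome typings available; `ALLOWED_ORIGIN` and `handleCommand` are illustrative names, not Anthropic's actual code.

```typescript
// Hypothetical sketch of the missing caller check (not Anthropic's code).
const ALLOWED_ORIGIN = "https://claude.ai";

// Stand-in for the extension's real command dispatcher.
declare function handleCommand(
  message: unknown,
  respond: (reply: unknown) => void
): void;

chrome.runtime.onMessageExternal.addListener((message, sender, sendResponse) => {
  // sender.origin is filled in by the browser and cannot be forged by the
  // caller. A malicious extension shows up as chrome-extension://<id>, so a
  // strict origin check rejects both wrong sites and other extensions.
  if (sender.origin !== ALLOWED_ORIGIN) {
    sendResponse({ error: "unauthorized caller" });
    return;
  }
  handleCommand(message, sendResponse);
});
```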

Latest Intel

01

CVE-2026-40352: FastGPT is an AI Agent building platform. In versions prior to 4.14.9.5, the password change endpoint is vulnerable to NoSQL injection.

security
Apr 17, 2026

FastGPT, an AI Agent building platform, has a vulnerability in its password change feature in versions before 4.14.9.5: attackers can use NoSQL injection (inserting MongoDB operators into input fields to manipulate database queries) to bypass password verification and take over accounts without knowing the current password.

Fix: Update FastGPT to version 4.14.9.5 or later.

NVD/CVE Database
02

CVE-2026-40351: FastGPT is an AI Agent building platform. In versions prior to 4.14.9.5, the password-based login endpoint uses TypeScript code that is vulnerable to NoSQL injection.

security
Apr 17, 2026

FastGPT, an AI Agent building platform, has a NoSQL injection vulnerability (an attack that manipulates database queries by inserting special operators) in its login system before version 4.14.9.5. The vulnerability allows unauthenticated attackers to bypass password checks and log in as any user, including administrators, by sending database operators instead of a real password.

Fix: This issue is fixed in version 4.14.9.5; users should upgrade to that version or later.

NVD/CVE Database
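
To make the two FastGPT advisories above concrete, here is a hedged sketch of how an operator-injectable MongoDB login differs from one that rejects non-string payloads. The `User` shape, `loginUnsafe`/`loginSafe`, and `verifyHash` are illustrative stand-ins, not FastGPT's actual code.

```typescript
import { Collection } from "mongodb";

interface User {
  username: string;
  passwordHash: string;
}

// Stand-in for a real hash comparison such as bcrypt.compare.
declare function verifyHash(password: string, hash: string): boolean;

// VULNERABLE pattern (illustrative): the request body flows straight into the
// query, so sending { "password": { "$ne": "" } } matches any stored hash and
// bypasses the password check entirely.
async function loginUnsafe(users: Collection<User>, body: any) {
  return users.findOne({ username: body.username, passwordHash: body.password });
}

// SAFER pattern: refuse anything that is not a plain string before it can
// smuggle in MongoDB operators, then compare the password explicitly.
async function loginSafe(users: Collection<User>, body: any) {
  if (typeof body.username !== "string" || typeof body.password !== "string") {
    throw new Error("invalid credentials payload");
  }
  const user = await users.findOne({ username: body.username });
  if (!user || !verifyHash(body.password, user.passwordHash)) return null;
  return user;
}
```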
03

GHSA-vfp4-8x56-j7c5: OpenClaw: Exec environment denylist missed high-risk interpreter startup variables

security
Apr 17, 2026

OpenClaw's execution-environment denylist missed several dangerous environment variables that users could set to make interpreters run commands at startup (VIMINIT, EXINIT, LUA_INIT) or to redirect hostname resolution (HOSTALIASES). This security gap affected OpenClaw versions before 2026.4.10.

Fix: Users should upgrade to openclaw version 2026.4.10 or newer. The latest npm release, openclaw@2026.4.14, already includes the fix, which expands the denylist (a list of blocked items) in the execution environment security policy to cover these high-risk environment variables.

GitHub Advisory Database
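
As a sketch of the denylist approach described in the fix: filter high-risk variables out of a child process's environment before spawning it. The four variable names come from the advisory; the real OpenClaw denylist is presumably broader, and `some-tool` is a placeholder command.

```typescript
import { spawn } from "node:child_process";

// Variables named in the advisory: the first three make interpreters run
// attacker-chosen commands at startup; HOSTALIASES redirects name resolution.
const DENIED_ENV = new Set(["VIMINIT", "EXINIT", "LUA_INIT", "HOSTALIASES"]);

function sanitizedEnv(env: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
  return Object.fromEntries(
    Object.entries(env).filter(([name]) => !DENIED_ENV.has(name))
  );
}

// Spawn child processes with the filtered environment, never process.env as-is.
const child = spawn("some-tool", ["--version"], { env: sanitizedEnv(process.env) });
child.on("exit", (code) => console.log(`exited with ${code}`));
```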
04

GHSA-5fw2-mwhh-9947: Flowise: Unauthenticated TTS endpoint accepts arbitrary credential IDs — enables API credit abuse via stored credentials

security
Apr 17, 2026

Flowise has a text-to-speech endpoint that doesn't require authentication but accepts a credential ID (an identifier for stored API keys like OpenAI or ElevenLabs) directly from user input. An attacker can use this to access someone else's stored API credentials and generate speech using the victim's API account, burning their API credits without permission.

Fix: Remove the TTS endpoint from the whitelist (the list of endpoints that don't need login), or add a check to ensure the credential ID matches the chatflow's TTS configuration. The source suggests: `if (!chatflowId) { return res.status(401).json({ message: "Authentication required" }) }`, meaning that if no chatflow ID is provided, the endpoint should reject the request with an authentication error.

GitHub Advisory Database
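
Expanding the advisory's suggested check into a fuller sketch: the handler rejects requests that carry no chatflow ID and only honors the credential that the chatflow's own TTS configuration references. `Chatflow`, `getChatflow`, and `synthesize` are hypothetical stand-ins for Flowise internals; only the 401 guard is quoted from the advisory itself.

```typescript
import type { Request, Response } from "express";

// Hypothetical shapes; not Flowise's real internals.
interface Chatflow {
  id: string;
  ttsCredentialId?: string;
}
declare function getChatflow(id: string): Promise<Chatflow | null>;
declare function synthesize(credentialId: string, text: string): Promise<Buffer>;

async function ttsHandler(req: Request, res: Response) {
  const { chatflowId, credentialId, text } = req.body ?? {};

  // The advisory's suggested guard: with no chatflow ID there is nothing to
  // validate the credential against, so reject the request outright.
  if (!chatflowId) {
    return res.status(401).json({ message: "Authentication required" });
  }

  // Only honor the credential that the chatflow's own TTS configuration
  // references, never an arbitrary credential ID from the request body.
  const chatflow = await getChatflow(chatflowId);
  if (!chatflow || chatflow.ttsCredentialId !== credentialId) {
    return res.status(403).json({ message: "Credential not permitted" });
  }

  const audio = await synthesize(credentialId, text);
  res.type("audio/mpeg").send(audio);
}
```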
05

GHSA-w47f-j8rh-wx87: Flowise: Public chatflow endpoints return unsanitized flowData including plaintext API keys, passwords, and credential IDs

security
Apr 17, 2026

Flowise version 3.0.13 has a security flaw where public chatflow endpoints return unsanitized data (raw information without filtering) that includes plaintext API keys, passwords, and credential IDs (unique references to stored login credentials). This happens because the code returns the complete chatflow object without removing sensitive fields, potentially exposing users' third-party account credentials and internal system architecture.

Fix: According to the source, apply sanitization to both public endpoints by calling `sanitizeFlowDataForPublicEndpoint(chatflow)` before returning the response, and ensure the sanitization function removes all `credential`, `password`, `apiKey`, and `secretKey` fields from the flowData. The source notes this sanitization function exists only in unreleased HEAD code, not in released v3.0.13.

GitHub Advisory Database
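
A minimal sketch of what such a sanitizer could look like: the four field names come from the advisory, while the recursive walk is an assumption about flowData being nested JSON (the actual `sanitizeFlowDataForPublicEndpoint` in HEAD may differ).

```typescript
// Field names come from the advisory; the recursion is an assumption.
const SENSITIVE_FIELDS = new Set(["credential", "password", "apiKey", "secretKey"]);

function sanitizeFlowData(value: unknown): unknown {
  if (Array.isArray(value)) {
    return value.map(sanitizeFlowData);
  }
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([key]) => !SENSITIVE_FIELDS.has(key))
        .map(([key, child]) => [key, sanitizeFlowData(child)])
    );
  }
  return value; // primitives pass through unchanged
}
```

Calling `sanitizeFlowData(chatflow)` before serializing the public response would drop every sensitive field at any depth while leaving the rest of the structure intact.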
06

OpenAI’s former Sora boss is leaving

industry
Apr 17, 2026

OpenAI abandoned its Sora video generation tool, and Bill Peebles, the leader of the Sora team, is leaving the company. OpenAI is refocusing its priorities away from what it calls "side quests" to concentrate on coding and enterprise products instead.

The Verge (AI)
07

Anthropic’s new cybersecurity model could get it back in the government’s good graces

industry, policy
Apr 17, 2026

Anthropic, an AI company, faced criticism from the Trump administration over national security concerns and has refused to allow its technology to be used for domestic mass surveillance or in fully autonomous weapons without human control. The company is now working to repair its relationship with the government by developing Claude Mythos Preview, a new AI model designed specifically for cybersecurity tasks.

The Verge (AI)
08

Perspective: AI demand is inflated, and only Anthropic is being realistic

industry
Apr 17, 2026

AI companies may be overestimating demand by measuring success through token consumption (tokens are the basic units of AI usage, roughly words and characters) rather than actual business value or return on investment. Anthropic is moving its pricing away from flat monthly fees toward per-token billing and has cut off third-party tools that were consuming excessive tokens without generating meaningful results, positioning itself better if AI demand projections prove inflated.

Fix: Anthropic's mitigation strategies mentioned in the source include: (1) moving from flat-rate enterprise pricing to per-token billing so revenue reflects actual usage; (2) cutting off third-party agentic tools (like OpenClaw) that were consuming large volumes of tokens unsustainably; and (3) planning infrastructure investment around a "cone of uncertainty", acknowledging that data centers take one to two years to build, so future demand must be estimated carefully rather than over-committed to on the basis of inflated projections.

CNBC Technology
09

White House Chief of Staff to Meet With Anthropic CEO Over Its New AI Technology

policy
Apr 17, 2026

The White House is planning a meeting between its Chief of Staff and Anthropic's CEO to discuss Anthropic's new AI technology and concerns about the security of software built with advanced AI models. This reflects ongoing government engagement with major AI labs about how their systems work and potential risks.

SecurityWeek
10

Anthropic's Dario Amodei to meet with White House about Mythos

policy
Apr 17, 2026

Anthropic CEO Dario Amodei is meeting with White House officials to discuss Mythos, a new AI model that can identify security weaknesses in software. This meeting marks a potential improvement in relations between Anthropic and the Trump administration, which had previously blacklisted the company and ordered federal agencies to stop using its Claude AI models, though a court temporarily blocked that directive.

CNBC Technology