aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher, this site helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 | Last 24 hours: 1 | Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 103/371
01

Why OpenAI killed Sora

industry
Mar 28, 2026

OpenAI discontinued its Sora video-generation app and canceled plans to add video generation to ChatGPT, also ending a $1 billion deal with Disney. The company made these decisions because Sora was consuming large amounts of computational resources without generating enough revenue to justify the expense, as OpenAI focuses on becoming profitable.

The Verge (AI)
02

‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real

safetypolicy
Mar 28, 2026

AI researchers report that online creators are using generative AI (artificial intelligence that creates images or videos from text descriptions) to produce fake images and videos of real political figures and entirely fabricated people, sometimes in military or sexualized contexts, to earn money and spread propaganda. These deepfakes (AI-generated fake media of people) are influential in shaping public perception of political figures, even when viewers know the content is not real.

The Guardian Technology
03

CVE-2026-4993: A vulnerability has been found in wandb OpenUI up to version 1.0. It impacts an unknown function of the file backend/openui/config.py.

security
Mar 28, 2026

A vulnerability (CVE-2026-4993) was found in wandb OpenUI up to version 1.0 where manipulating the LITELLM_MASTER_KEY argument in the backend/openui/config.py file can expose hard-coded credentials (passwords stored directly in the code). This vulnerability requires local access to exploit and has already been publicly disclosed, though the vendor did not respond to early notification.
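The standard remediation for hard-coded credentials is to move the secret out of source code and into the environment. A minimal illustrative sketch (not OpenUI's actual code; the variable name is taken from the advisory):

```python
# Illustrative remediation for hard-coded credentials: load the secret from
# the environment at startup and refuse to run without it, rather than
# shipping a default value inside config.py.
import os

def load_master_key() -> str:
    key = os.environ.get("LITELLM_MASTER_KEY", "")
    if not key:
        raise RuntimeError("LITELLM_MASTER_KEY must be set in the environment")
    return key
```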

NVD/CVE Database
04

GHSA-frv4-x25r-588m: Giskard Agents have Server-side template injection via ChatWorkflow.chat() using non-sandboxed Jinja2 Environment

security
Mar 27, 2026

Giskard Agents contain a server-side template injection vulnerability in the `ChatWorkflow.chat()` method, which treats user input as Jinja2 template code (a templating language that processes special syntax) instead of plain text. If a developer passes user-provided data directly to this method, an attacker can execute arbitrary code on the server by embedding malicious Jinja2 syntax in their input.

Fix: Update to giskard-agents version 0.3.4 (stable branch) or 1.0.2b1 (pre-release branch). The fix replaces the unsandboxed Jinja2 Environment with SandboxedEnvironment, which blocks access to attributes starting with underscores and prevents the class traversal attacks that enable remote code execution.
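A minimal sketch of the vulnerability class (not Giskard's actual code): rendering untrusted input with a plain Jinja2 `Environment` lets an attacker traverse from a string literal to arbitrary Python classes, while `SandboxedEnvironment` refuses access to underscore-prefixed attributes.

```python
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment, SecurityError

# A classic SSTI probe: climb from '' to object and enumerate its subclasses.
payload = "{{ ''.__class__.__mro__[1].__subclasses__() | length }}"

# Unsandboxed: the payload executes and reaches arbitrary Python classes.
leaked = Environment().from_string(payload).render()
print(int(leaked) > 0)  # True

# Sandboxed: underscore attribute access raises SecurityError at render time.
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError:
    print("blocked")
```

This is why the advisory's fix swaps in `SandboxedEnvironment`: the sandbox blocks the class-traversal chain that turns template injection into remote code execution.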

GitHub Advisory Database
05

STADLER reshapes knowledge work at a 230-year-old company

industry
Mar 27, 2026

STADLER, a 230-year-old recycling equipment company, embedded ChatGPT (an AI language model that generates human-like text) across its workforce to speed up knowledge work like drafting, summarizing, and translating. The company achieved 30-40% time savings on common tasks, 2.5x faster first drafts, and 85% daily active usage by providing company-wide access, training, and clear guardrails while encouraging bottom-up experimentation.

OpenAI Blog
06

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature executed LLM-generated Python code on the server.

security
Mar 27, 2026

Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.9.0 where the Agentic Assistant feature would execute Python code generated by an LLM (large language model) on the server. An attacker who could access this feature and control what the model outputs could run arbitrary code (malicious commands) on the server itself.

Fix: Update to version 1.9.0, which fixes the issue.
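One common defense-in-depth pattern for this vulnerability class (a hypothetical sketch, not Langflow's actual patch) is to gate model-generated code behind an AST allowlist before anything is executed, rejecting imports, attribute access, and calls to anything outside a small set of builtins:

```python
# Hypothetical AST gate for LLM-generated code: reject snippets containing
# imports, attribute access, or calls to anything outside a small allowlist.
import ast

ALLOWED_CALLS = {"len", "sum", "min", "max"}

def is_safe_snippet(source: str) -> bool:
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Imports and attribute access are the usual escape hatches.
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.Attribute)):
            return False
        if isinstance(node, ast.Call):
            if not (isinstance(node.func, ast.Name)
                    and node.func.id in ALLOWED_CALLS):
                return False
    return True

print(is_safe_snippet("total = sum([1, 2, 3])"))         # True
print(is_safe_snippet("__import__('os').system('id')"))  # False
print(is_safe_snippet("import subprocess"))               # False
```

A static gate like this is brittle on its own; the more robust approach is the one the summary implies, never executing attacker-influenced model output with server privileges at all, or confining it to a separate sandboxed process.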

NVD/CVE Database
07

CVE-2026-33654: nanobot is a personal AI assistant. Prior to version 0.1.6, an indirect prompt injection vulnerability exists in the email channel processing module.

security
Mar 27, 2026

Nanobot, a personal AI assistant, had a vulnerability in its email module that allowed attackers to send malicious prompts via email, which the bot would automatically process as trusted commands without the owner's knowledge. This is a type of indirect prompt injection (tricking an AI by hiding instructions in its input) that could let attackers run arbitrary system tools through the bot. Version 0.1.6 fixes this flaw.

Fix: Update nanobot to version 0.1.6 or later, which patches the vulnerability in the email channel processing module.
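The general mitigation pattern for this class of bug (a hypothetical sketch, not nanobot's actual code) is to treat tool calls that originate from untrusted channels such as inbound email as proposals requiring owner approval, rather than commands to auto-execute:

```python
# Hypothetical sketch: tool calls triggered while processing untrusted
# channel input (e.g. inbound email) are queued for owner approval instead
# of being executed automatically.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class Agent:
    pending: list = field(default_factory=list)

    def handle(self, channel: str, call: ToolCall) -> str:
        trusted = channel in {"owner_cli", "owner_chat"}
        if not trusted:
            self.pending.append(call)      # hold for explicit approval
            return "queued"
        return f"executed {call.name}"     # owner-initiated: run directly

agent = Agent()
print(agent.handle("email", ToolCall("run_shell", {"cmd": "id"})))  # queued
print(agent.handle("owner_cli", ToolCall("summarize", {})))  # executed summarize
```

The channel names and classes here are illustrative; the point is that instructions embedded in attacker-controlled content must never inherit the owner's authority.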

NVD/CVE Database
08

CVE-2026-31951: LibreChat is a ChatGPT clone with additional features. In versions 0.8.2-rc1 through 0.8.3-rc1, user-created MCP (Model Context Protocol) servers can leak OAuth tokens.

security
Mar 27, 2026

LibreChat versions 0.8.2-rc1 through 0.8.3-rc1 have a vulnerability where user-created MCP (Model Context Protocol, a system for connecting AI models to external tools) servers can steal OAuth tokens (security credentials used for authentication). An attacker can create a malicious MCP server with special headers that trick LibreChat into substituting sensitive tokens, which are then leaked when victims use tools on that server.

Fix: Update to version 0.8.3-rc2, which fixes the issue.

NVD/CVE Database
09

CVE-2026-31950: LibreChat is a ChatGPT clone with additional features. In versions 0.8.2-rc2 through 0.8.2-rc3, the SSE streaming endpoint lacks a stream-ownership check.

security
Mar 27, 2026

LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2-rc3 have a security flaw in the SSE streaming endpoint (a real-time data connection) at `/api/agents/chat/stream/:streamId` that fails to check if a user actually owns a chat stream. This means any logged-in user can guess or obtain another user's stream ID and read their live conversations, including messages and AI responses, without permission.

Fix: Version 0.8.2 patches the issue.
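The missing control is a per-object authorization check, a classic IDOR pattern. A language-agnostic sketch (LibreChat itself is Node.js; the data and names here are illustrative):

```python
# Hypothetical sketch of the missing check: a stream endpoint must verify
# that the authenticated user OWNS the requested stream, not merely that
# the user is logged in.
stream_owners = {"s-123": "alice", "s-456": "bob"}  # streamId -> owner

def read_stream(stream_id: str, authenticated_user: str):
    owner = stream_owners.get(stream_id)
    if owner is None:
        return 404, "not found"
    if owner != authenticated_user:
        return 403, "forbidden"   # the check the vulnerable versions lacked
    return 200, f"events for {stream_id}"

print(read_stream("s-123", "alice"))    # (200, 'events for s-123')
print(read_stream("s-123", "mallory"))  # (403, 'forbidden')
```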

NVD/CVE Database
10

CVE-2026-31945: LibreChat is a ChatGPT clone with additional features. Versions 0.8.2-rc2 through 0.8.2 are vulnerable to a server-side request forgery (SSRF).

security
Mar 27, 2026

LibreChat (a ChatGPT alternative with extra features) versions 0.8.2-rc2 through 0.8.2 have a vulnerability that allows attackers to access internal systems through SSRF (server-side request forgery, where an attacker tricks a server into making requests to resources it shouldn't access). Even though a previous SSRF fix was applied, it only checked domain names and didn't verify whether those names actually point to private IP addresses (internal network addresses), leaving the system exposed.

Fix: Update to version 0.8.3-rc1, which contains a patch for this vulnerability.
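The flaw described above, validating the hostname string instead of what it resolves to, is a recurring SSRF-bypass pattern. A generic sketch of the stronger check (not LibreChat's actual code): resolve the name and reject the request if any resulting address is private, loopback, or link-local.

```python
# Generic SSRF guard sketch: checking the hostname string is not enough,
# because an attacker-controlled DNS name can point at an internal address.
# Resolve it and inspect every returned IP.
import ipaddress
import socket

def is_safe_url_host(hostname: str) -> bool:
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # unresolvable: refuse rather than guess
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_url_host("127.0.0.1"))  # False
print(is_safe_url_host("10.1.2.3"))   # False
```

Even this is not complete on its own: a robust fix also pins the resolved IP for the actual outbound request, since a rebinding attacker can return a different address on the second DNS lookup.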

NVD/CVE Database