aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
5
[LAST_7D]
162
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 1/265
VIEW ALL
01

CVE-2026-5002: A vulnerability has been found in PromtEngineer localGPT up to 4d41c7d1713b16b216d8e062e51a5dd88b20b054. The impacted el

security
Mar 28, 2026

A vulnerability (CVE-2026-5002) was discovered in PromtEngineer localGPT that allows injection attacks (inserting malicious code into input) through the LLM Prompt Handler component in the backend/server.py file. An attacker can exploit this vulnerability remotely, and the exploit code has been publicly released. The vendor has not responded to disclosure attempts, and because the product uses rolling releases (continuous updates without traditional version numbers), specific patch information is unavailable.

Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

NVD/CVE Database
02

TikTok’s policy for AI ads isn’t working

policysafety
Mar 28, 2026

Companies like Samsung are posting ads on TikTok that appear to be made with generative AI (AI systems that create images or videos from text descriptions), but they're not adding the required AI disclosure labels that TikTok's advertising policies demand. This means users can't easily tell whether the ads they see are AI-generated or made by humans, even though the companies creating them know the truth.

The Verge (AI)
03

Why OpenAI killed Sora

industry
Mar 28, 2026

OpenAI discontinued its Sora video-generation app and canceled plans to add video generation to ChatGPT, also ending a $1 billion deal with Disney. The company made these decisions because Sora was consuming large amounts of computational resources without generating enough revenue to justify the expense, as OpenAI focuses on becoming profitable.

The Verge (AI)
04

‘They feel true’: political deepfakes are growing in influence – even if people know they aren’t real

safetypolicy
Mar 28, 2026

AI researchers report that online creators are using generative AI (artificial intelligence that creates images or videos from text descriptions) to produce fake images and videos of real political figures and entirely fabricated people, sometimes in military or sexualized contexts, to earn money and spread propaganda. These deepfakes (AI-generated fake media of people) are influential in shaping public perception of political figures, even when viewers know the content is not real.

The Guardian Technology
05

CVE-2026-4993: A vulnerability has been found in wandb OpenUI up to 0.0.0.0/1.0. This impacts an unknown function of the file backend/o

security
Mar 28, 2026

A vulnerability (CVE-2026-4993) was found in wandb OpenUI up to version 1.0 where manipulating the LITELLM_MASTER_KEY argument in the backend/openui/config.py file can expose hard-coded credentials (passwords stored directly in the code). This vulnerability requires local access to exploit and has already been publicly disclosed, though the vendor did not respond to early notification.

NVD/CVE Database
06

GHSA-frv4-x25r-588m: Giskard Agents have Server-side template injection via ChatWorkflow.chat() using non-sandboxed Jinja2 Environment

security
Mar 27, 2026

Giskard Agents contain a server-side template injection vulnerability in the `ChatWorkflow.chat()` method, which treats user input as Jinja2 template code (a templating language that processes special syntax) instead of plain text. If a developer passes user-provided data directly to this method, an attacker can execute arbitrary code on the server by embedding malicious Jinja2 syntax in their input.

Fix: Update to giskard-agents version 0.3.4 (stable branch) or 1.0.2b1 (pre-release branch). The fix replaces the unsandboxed Jinja2 Environment with SandboxedEnvironment, which blocks access to attributes starting with underscores and prevents the class traversal attacks that enable remote code execution.

GitHub Advisory Database
07

STADLER reshapes knowledge work at a 230-year-old company

industry
Mar 27, 2026

STADLER, a 230-year-old recycling equipment company, embedded ChatGPT (an AI language model that generates human-like text) across its workforce to speed up knowledge work like drafting, summarizing, and translating. The company achieved 30-40% time savings on common tasks, 2.5x faster first drafts, and 85% daily active usage by providing company-wide access, training, and clear guardrails while encouraging bottom-up experimentation.

OpenAI Blog
08

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

security
Mar 27, 2026

Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.9.0 where the Agentic Assistant feature would execute Python code generated by an LLM (large language model) on the server. An attacker who could access this feature and control what the model outputs could run arbitrary code (malicious commands) on the server itself.

Fix: Update to version 1.9.0, which fixes the issue.

NVD/CVE Database
09

CVE-2026-33654: nanobot is a personal AI assistant. Prior to version 0.1.6, an indirect prompt injection vulnerability exists in the ema

security
Mar 27, 2026

Nanobot, a personal AI assistant, had a vulnerability in its email module that allowed attackers to send malicious prompts via email, which the bot would automatically process as trusted commands without the owner's knowledge. This is a type of indirect prompt injection (tricking an AI by hiding instructions in its input) that could let attackers run arbitrary system tools through the bot. Version 0.1.6 fixes this flaw.

Fix: Update nanobot to version 0.1.6 or later, which patches the vulnerability in the email channel processing module.

NVD/CVE Database
10

CVE-2026-31951: LibreChat is a ChatGPT clone with additional features. In versions 0.8.2-rc1 through 0.8.3-rc1, user-created MCP (Model

security
Mar 27, 2026

LibreChat versions 0.8.2-rc1 through 0.8.3-rc1 have a vulnerability where user-created MCP (Model Context Protocol, a system for connecting AI models to external tools) servers can steal OAuth tokens (security credentials used for authentication). An attacker can create a malicious MCP server with special headers that trick LibreChat into substituting sensitive tokens, which are then leaked when victims use tools on that server.

Fix: Update to version 0.8.3-rc2, which fixes the issue.

NVD/CVE Database
123...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026