aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24 hours: 5 · Last 7 days: 161
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.


Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.


Latest Intel

01

CVE-2026-33622: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.8.3` through `v0.8.5` are affected.

security
Mar 26, 2026

PinchTab is an HTTP server that allows AI agents to control a Chrome browser, but versions 0.8.3 through 0.8.5 have a security flaw where two endpoints (POST /wait and POST /tabs/{id}/wait) can execute arbitrary JavaScript (run code of an attacker's choice in the browser) even when JavaScript evaluation is disabled by the operator. Unlike the properly protected POST /evaluate endpoint, these vulnerable endpoints don't check the security policy before running user-provided code, though an attacker still needs valid authentication credentials to exploit it.
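The flaw is a missing policy check, not a broken one. The sketch below is illustrative only (it is not PinchTab's actual code, and the `Policy` class and handler names are invented for this example): a protected endpoint consults the operator's policy before evaluating caller-supplied code, while the vulnerable wait handler skips that check for its code-evaluating `fn` mode.

```python
# Illustrative sketch of the CVE-2026-33622 pattern. All names here are
# hypothetical; PinchTab's real implementation is not shown in the advisory.

class Policy:
    """Operator-configured security policy."""
    def __init__(self, allow_js_eval: bool):
        self.allow_js_eval = allow_js_eval

def handle_evaluate(policy: Policy, expression: str) -> dict:
    """Protected endpoint: refuses caller-supplied code when eval is disabled."""
    if not policy.allow_js_eval:
        return {"status": 403, "error": "JavaScript evaluation is disabled"}
    return {"status": 200, "result": f"would evaluate: {expression}"}

def handle_wait_vulnerable(policy: Policy, mode: str, expression: str) -> dict:
    """Vulnerable pattern: the 'fn' wait mode runs code without the policy check."""
    if mode == "fn":
        # Missing: the same `policy.allow_js_eval` gate used above.
        return {"status": 200, "result": f"would evaluate: {expression}"}
    return {"status": 200, "result": f"waiting on condition: {mode}"}
```

With evaluation disabled, the protected endpoint returns 403 while the vulnerable wait handler still "runs" the expression — which is exactly the asymmetry the advisory describes.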

Fix: The project's current worktree applies the same policy boundary to `fn` mode in `/wait` that already exists on `/evaluate`, while preserving the non-code wait modes. As of publication, however, a patched release was not yet available.

NVD/CVE Database

Daily Briefing (continued)

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

Critical This Week (5 issues)

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant component is affected. (NVD/CVE Database, Mar 27, 2026)
02

CVE-2026-33621: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.7.7` through `v0.8.4` are affected.

security
Mar 26, 2026

PinchTab is an HTTP server (a program that handles web requests) that lets AI agents control a Chrome browser, but versions 0.7.7 through 0.8.4 had incomplete protections against brute-force attacks (rapid repeated requests) on endpoints that check authentication tokens. The middleware (software layer that filters requests) designed to limit requests per IP address was either not activated or had flaws like trusting client-controlled headers, making it easier for attackers to guess weak passwords if they could reach the API.

Fix: This was fully addressed in v0.8.5 by applying RateLimitMiddleware in the production handler chain, deriving the client address from the immediate peer IP instead of trusting forwarded headers by default, and removing the /health and /metrics exemption so auth-checkable endpoints are throttled as well.

NVD/CVE Database
03

CVE-2026-33620: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.7.8` through `v0.8.3` are affected.

security
Mar 26, 2026

PinchTab, an HTTP server that lets AI agents control Chrome browsers, had a vulnerability in versions 0.7.8 through 0.8.3 where API tokens (credentials that prove you're authorized to use the service) could be passed as URL query parameters, making them visible in logs and browser history instead of being kept private in secure headers. This exposed sensitive credentials to intermediary systems that record full URLs, though it only affected deployments that actually used this method of passing tokens.

Fix: This was addressed in v0.8.4 by removing query-string token authentication and requiring safer header- or session-based authentication flows.

NVD/CVE Database
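The v0.8.4 change amounts to refusing one transport for credentials and requiring another. A minimal sketch of that check (not PinchTab's code; the function and the placeholder token are invented for illustration):

```python
# Illustrative auth check for the CVE-2026-33620 pattern: reject tokens in the
# query string, where access logs, proxies, and browser history record them,
# and accept only an Authorization header compared in constant time.
import hmac

EXPECTED_TOKEN = "example-secret-token"  # placeholder; load from config/secret store

def authenticate(headers: dict, query_params: dict) -> tuple[bool, str]:
    if "token" in query_params:
        # Query-string credentials leak to any intermediary that logs full URLs.
        return False, "tokens in query parameters are not accepted"
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False, "missing bearer token"
    supplied = auth[len("Bearer "):]
    # compare_digest avoids timing side channels in the comparison itself.
    if not hmac.compare_digest(supplied, EXPECTED_TOKEN):
        return False, "invalid token"
    return True, "ok"
```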
04

CVE-2026-33619: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab v0.8.3 contains a server-side request forgery (SSRF) vulnerability in its optional webhook system.

security
Mar 26, 2026

PinchTab v0.8.3, a tool that lets AI agents control Chrome browsers through an HTTP server, has a server-side request forgery vulnerability (SSRF, where the server can be tricked into making requests to unintended targets) in its optional webhook system. When tasks are submitted with a user-controlled callback URL, the server sends an HTTP request to that URL without properly validating it, allowing attackers to make the server send requests to private or internal network addresses.

Fix: This was addressed in v0.8.4 by validating callback targets before dispatch, rejecting non-public IP ranges, pinning delivery to validated IPs, disabling redirect following, and validating callbackUrl during task submission.

NVD/CVE Database
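The mitigation shape described for this CVE — validate the callback target, reject non-public ranges, pin delivery to the validated IP, and refuse redirects — can be sketched as follows. This is an assumption-laden illustration, not PinchTab's implementation; `validate_callback_url` and its resolver parameter are invented here:

```python
# Illustrative SSRF guard for user-supplied callback URLs: resolve the host
# and reject any address that is not globally routable (loopback, RFC 1918,
# link-local, etc.). The resolver is injectable so the check is testable.
import ipaddress
import socket
from urllib.parse import urlparse

def validate_callback_url(url: str, resolver=socket.getaddrinfo) -> str:
    """Return the validated IP for the callback host, or raise ValueError."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError("callback must be an http(s) URL with a host")
    port = parsed.port or (80 if parsed.scheme == "http" else 443)
    infos = resolver(parsed.hostname, port)
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if not ip.is_global:
            raise ValueError(f"callback resolves to non-public address {ip}")
    # The actual delivery must connect to this validated IP (pinning) and
    # disable redirect following, or the check can be bypassed after the fact.
    return str(ipaddress.ip_address(infos[0][4][0]))
```

Pinning matters because of time-of-check/time-of-use: if the delivery step re-resolves the hostname, a DNS-rebinding attacker can pass validation with a public IP and then serve a private one.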
05

GHSA-cxmw-p77q-wchg: OpenClaw: Arbitrary code execution via unvalidated WebView JavascriptInterface

security
Mar 26, 2026

Android Canvas WebView pages (web content displayed inside an Android app) from untrusted sources could call the JavascriptInterface bridge (a connection that lets web code run native app commands), allowing attackers to inject malicious instructions into the app. The vulnerability was fixed by validating the origin (where the web content comes from) before allowing bridge calls.

Fix: Update to version 2026.3.22 or later. The fix validates page origin and rejects untrusted bridge calls, with trusted origin and path validation now centralized in CanvasActionTrust.kt.

GitHub Advisory Database
06

CISA: New Langflow flaw actively exploited to hijack AI workflows

security
Mar 26, 2026

CISA warns that hackers are actively exploiting CVE-2026-33017, a critical vulnerability (rated 9.3 out of 10) in Langflow, an open-source framework for building AI workflows. This code injection flaw allows attackers to execute arbitrary Python code and gain remote code execution (the ability to run commands on a system they don't own) on unpatched systems running version 1.8.1 or earlier, with exploitation beginning just 20 hours after the vulnerability details were made public.

Fix: System administrators should upgrade to Langflow version 1.9.0 or later, which addresses the vulnerability. Alternatively, administrators can disable or restrict the vulnerable endpoint. Endor Labs additionally recommends not exposing Langflow directly to the internet, monitoring outbound traffic, and rotating API keys, database credentials, and cloud secrets if suspicious activity is detected.

BleepingComputer
07

The CISO’s guide to responding to shadow AI

policy, security
Mar 26, 2026

Shadow AI refers to AI tools that employees use without approval from their organization, whether these are standalone tools or AI features embedded in existing software that weren't clearly communicated. CISOs (chief information security officers, the executives responsible for an organization's security) need to assess the risks these tools pose, understand why employees are using them, and decide whether to block them or bring them into official company use.

Fix: This is a response playbook rather than a technical patch. CISOs should (1) assess the specific risk by examining data sensitivity, how the AI provider handles data, and whether a breach occurred; (2) understand why employees are using shadow AI and educate them on the risks; (3) check whether the organization already has approved tools that meet the same needs; and (4) redirect employees to approved alternatives "with a serious reminder" of approval requirements. Organizations with slow AI adoption tend to see more shadow AI use, so faster official adoption may reduce it.

CSO Online
08

Google’s ‘live’ AI search assistant can handle conversations in dozens more languages

industry
Mar 26, 2026

Google is expanding Search Live, an AI search assistant that lets users search the web using their voice and camera to ask questions about physical objects or tasks. The feature, which initially launched in the US, is now available in over 200 countries and territories in dozens of languages.

The Verge (AI)
09

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

security
Mar 26, 2026

A prototype pollution vulnerability (a type of attack that modifies how objects are created in JavaScript) in n8n's GSuiteAdmin node allows authenticated users to execute arbitrary code on the n8n server by crafting malicious workflow parameters. An attacker with permission to create or modify workflows could exploit this to gain control over the entire n8n instance.

Fix: The issue has been fixed in n8n versions 2.14.1, 2.13.3, and 1.123.27. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can: (1) limit workflow creation and editing permissions to fully trusted users only, or (2) disable the XML node by adding `n8n-nodes-base.xml` to the `NODES_EXCLUDE` environment variable. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

GitHub Advisory Database
10

Gemini 3.1 Flash Live: Making audio AI more natural and reliable

industry
Mar 26, 2026

Google has released Gemini 3.1 Flash Live, a new audio model that makes voice conversations with AI sound more natural and reliable by understanding tone better and responding faster. Developers can use it through the Gemini Live API to build voice agents for complex tasks, while regular users can access it through Search Live and Gemini Live across over 200 countries. The model includes audio watermarking (a hidden digital marker added to audio to verify its source) to help prevent misinformation.

DeepMind Safety Research
Critical This Week (continued)

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)

CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)

GHSA-mxrg-77hm-89hv (CVE-2026-33696): n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE (GitHub Advisory Database, Mar 26, 2026)