aisecwatch.com
Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2026-33621: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.7.7` through `v0.8.4` shipped incomplete brute-force protections on token-checking endpoints.

security
Mar 26, 2026

PinchTab is an HTTP server (a program that handles web requests) that lets AI agents control a Chrome browser, but versions 0.7.7 through 0.8.4 had incomplete protections against brute-force attacks (rapid repeated requests) on endpoints that check authentication tokens. The middleware (software layer that filters requests) designed to limit requests per IP address was either not activated or had flaws like trusting client-controlled headers, making it easier for attackers to guess weak passwords if they could reach the API.

Fix: This was fully addressed in v0.8.5 by applying RateLimitMiddleware in the production handler chain, deriving the client address from the immediate peer IP instead of trusting forwarded headers by default, and removing the /health and /metrics exemption so auth-checkable endpoints are throttled as well.

NVD/CVE Database
02

CVE-2026-33620: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.7.8` through `v0.8.3` allowed API tokens to be passed as URL query parameters.

security
Mar 26, 2026

PinchTab, an HTTP server that lets AI agents control Chrome browsers, had a vulnerability in versions 0.7.8 through 0.8.3 where API tokens (credentials that prove you're authorized to use the service) could be passed as URL query parameters, making them visible in logs and browser history instead of being kept private in secure headers. This exposed sensitive credentials to intermediary systems that record full URLs, though it only affected deployments that actually used this method of passing tokens.

Fix: This was addressed in v0.8.4 by removing query-string token authentication and requiring safer header- or session-based authentication flows.

NVD/CVE Database
03

CVE-2026-33619: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab v0.8.3 contains a server-side request forgery (SSRF) vulnerability in its optional webhook callback system.

security
Mar 26, 2026

PinchTab v0.8.3, a tool that lets AI agents control Chrome browsers through an HTTP server, has a server-side request forgery vulnerability (SSRF, where the server can be tricked into making requests to unintended targets) in its optional webhook system. When tasks are submitted with a user-controlled callback URL, the server sends an HTTP request to that URL without properly validating it, allowing attackers to make the server send requests to private or internal network addresses.

Fix: This was addressed in v0.8.4 by validating callback targets before dispatch, rejecting non-public IP ranges, pinning delivery to validated IPs, disabling redirect following, and validating callbackUrl during task submission.
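The validate-then-pin step described in the fix can be sketched like this: resolve the callback host, reject any non-public address, and return the validated IP so delivery can be pinned to it (closing the DNS-rebinding gap between check and use). The function name is hypothetical; this is the general SSRF mitigation pattern, not PinchTab's actual implementation.

```python
import ipaddress
import socket
from urllib.parse import urlsplit


def validate_callback(url: str) -> str:
    """Reject webhook callback URLs that resolve to non-public addresses.

    Returns the validated IP address as a string, so the caller can pin
    the outbound connection to it instead of re-resolving the hostname.
    Redirect following should also be disabled on delivery, since a
    redirect could otherwise bounce the request to an internal target.
    """
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        raise ValueError("callbackUrl must be an absolute http(s) URL")
    port = parts.port or (443 if parts.scheme == "https" else 80)
    infos = socket.getaddrinfo(parts.hostname, port, proto=socket.IPPROTO_TCP)
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # is_global rejects loopback, private (RFC 1918), link-local,
        # and other reserved ranges in one check.
        if not ip.is_global:
            raise ValueError(f"callbackUrl resolves to non-public address {ip}")
    return str(ipaddress.ip_address(infos[0][4][0]))
```

So `validate_callback("http://127.0.0.1/hook")` and `validate_callback("http://192.168.1.5/hook")` both raise, while a publicly routable target passes and returns its pinned IP.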

NVD/CVE Database
04

GHSA-cxmw-p77q-wchg: OpenClaw: Arbitrary code execution via unvalidated WebView JavascriptInterface

security
Mar 26, 2026

In OpenClaw's Android app, Canvas WebView pages (web content displayed inside the app) from untrusted sources could call the JavascriptInterface bridge (a connection that lets web code run native app commands), allowing attackers to inject malicious instructions into the app.

Fix: Update to version 2026.3.22 or later. The fix validates page origin and rejects untrusted bridge calls, with trusted origin and path validation now centralized in CanvasActionTrust.kt.
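The core of the fix is an exact-origin check before any bridge call is honored. A minimal sketch of that pattern (in Python for illustration; the real fix lives in Kotlin, and the allowlisted origin here is a made-up placeholder):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist; the real app centralizes this in CanvasActionTrust.kt.
TRUSTED_ORIGINS = {"https://canvas.example.com"}


def bridge_call_allowed(page_url: str) -> bool:
    """Permit native-bridge calls only from exact, trusted HTTPS origins.

    The comparison is against the full scheme://host[:port] origin, never a
    substring of the URL, so lookalike hosts such as
    "https://canvas.example.com.evil.net" are rejected.
    """
    parts = urlsplit(page_url)
    if parts.scheme != "https" or not parts.hostname:
        return False
    origin = f"{parts.scheme}://{parts.hostname}"
    if parts.port:
        origin += f":{parts.port}"
    return origin in TRUSTED_ORIGINS
```

The key design point is validating at the bridge boundary rather than trusting whatever page happens to be loaded in the WebView.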

GitHub Advisory Database
05

CISA: New Langflow flaw actively exploited to hijack AI workflows

security
Mar 26, 2026

CISA warns that hackers are actively exploiting CVE-2026-33017, a critical vulnerability (rated 9.3 out of 10) in Langflow, an open-source framework for building AI workflows. This code injection flaw lets attackers execute arbitrary Python code, giving them remote code execution (the ability to run commands on a system they don't own) on unpatched systems running version 1.8.1 or earlier. Exploitation began just 20 hours after the vulnerability details were made public.

Fix: System administrators should upgrade to Langflow version 1.9.0 or later, which addresses the vulnerability. Alternatively, administrators can disable or restrict the vulnerable endpoint. Endor Labs additionally recommends not exposing Langflow directly to the internet, monitoring outbound traffic, and rotating API keys, database credentials, and cloud secrets if suspicious activity is detected.

BleepingComputer
06

The CISO’s guide to responding to shadow AI

policysecurity
Mar 26, 2026

Shadow AI refers to AI tools that employees use without approval from their organization, whether these are standalone tools or AI features embedded in existing software that weren't clearly communicated. CISOs (chief information security officers, the executives responsible for an organization's security) need to assess the risks these tools pose, understand why employees are using them, and decide whether to block them or bring them into official company use.

Fix: The source describes a response approach rather than a technical fix: CISOs should (1) assess the specific risk by examining data sensitivity, how the AI provider handles data, and whether a breach occurred, (2) understand why employees are using shadow AI and educate them on the risks, (3) check whether the organization already has approved tools that meet the same needs, and (4) redirect employees to approved alternatives "with a serious reminder" of approval requirements. The source also notes that organizations slow to adopt AI tend to see more shadow AI, suggesting that faster official adoption may reduce it.

CSO Online
07

Google’s ‘live’ AI search assistant can handle conversations in dozens more languages

industry
Mar 26, 2026

Google is expanding Search Live, an AI search assistant that lets users search the web using their voice and camera to ask questions about physical objects or tasks. The feature, which initially launched in the US, is now available in over 200 countries and territories in dozens of languages.

The Verge (AI)
08

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

security
Mar 26, 2026

A prototype pollution vulnerability (a type of attack that modifies the shared base object all JavaScript objects inherit from) in n8n's XML and GSuiteAdmin node parameters allows authenticated users to execute arbitrary code on the n8n server by crafting malicious workflow parameters. An attacker with permission to create or modify workflows could exploit this to gain control over the entire n8n instance.

Fix: The issue has been fixed in n8n versions 2.14.1, 2.13.3, and 1.123.27. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can: (1) limit workflow creation and editing permissions to fully trusted users only, or (2) disable the XML node by adding `n8n-nodes-base.xml` to the `NODES_EXCLUDE` environment variable. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.

GitHub Advisory Database
09

datasette-llm 0.1a2

industry
Mar 26, 2026

This is a brief announcement about datasette-llm version 0.1a2, posted by Simon Willison on March 26, 2026. The post appears to be part of a monthly briefing on LLM (large language model) developments, with a sponsorship offer for readers interested in curated summaries of important AI news.

Simon Willison's Weblog
10

Gemini 3.1 Flash Live: Making audio AI more natural and reliable

industry
Mar 26, 2026

Google has released Gemini 3.1 Flash Live, a new audio model that makes voice conversations with AI sound more natural and reliable by understanding tone better and responding faster. Developers can use it through the Gemini Live API to build voice agents for complex tasks, while regular users can access it through Search Live and Gemini Live across over 200 countries. The model includes audio watermarking (a hidden digital marker added to audio to verify its source) to help prevent misinformation.

DeepMind Safety Research