aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

datasette-llm 0.1a3

industry
Mar 30, 2026

This is a brief announcement for datasette-llm version 0.1a3, posted by Simon Willison on March 30, 2026. The source does not provide details about what datasette-llm does, what features it includes, or what issues it addresses.

Simon Willison's Weblog
02

GHSA-m3mh-3mpg-37hw: OpenClaw has an Arbitrary Malicious Code Execution Vulnerability

security
Mar 30, 2026

OpenClaw has a vulnerability where malicious plugins or hooks can execute arbitrary code during installation. An attacker can create a `.npmrc` file (npm's configuration file) in a malicious plugin or hook directory that redirects the git executable to a malicious program, which gets executed when OpenClaw runs `npm install` during the installation phase.

Fix: Fixed in OpenClaw 2026.3.24, the current shipping release.

GitHub Advisory Database
03

GHSA-68f8-9mhj-h2mp: OpenClaw Gateway HTTP /v1/models Route Bypasses Operator Read Scope

security
Mar 30, 2026

OpenClaw has a security inconsistency where the HTTP endpoint `/v1/models` (which serves OpenAI-compatible requests) accepts bearer authentication but doesn't check operator scopes (permissions that control what actions a user can perform), while the WebSocket RPC path correctly requires the `operator.read` scope. This means someone with only `operator.approvals` permission can bypass the scope requirement and view model metadata through the HTTP route, even though they would be rejected over WebSocket.

Fix: Fixed in OpenClaw 2026.3.24, the current shipping release. The patch involves: (1) enforcing read scope on `/v1/models` routes before serving the endpoint, (2) reusing the centralized scope-authorization helper function (`authorizeOperatorScopesForMethod(...)`) that WebSocket already uses for HTTP compatibility endpoints to prevent policy drift, and (3) adding regression tests to verify that `operator.approvals` without read is rejected on HTTP `/v1/models` while `operator.read` is accepted on both WebSocket and HTTP.
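This bug class appears whenever two transports share one authorization model but enforce it separately. A minimal sketch of the fixed pattern, routing every transport through a single scope-check helper (names are hypothetical, not OpenClaw's actual code):

```python
# Sketch: one central scope-authorization helper that both HTTP routes
# and WebSocket RPC must call, so the two paths cannot drift apart.

REQUIRED_SCOPES = {"models.list": {"operator.read"}}  # method -> required scopes

def authorize(method: str, granted_scopes: set[str]) -> bool:
    """Central helper shared by every transport."""
    required = REQUIRED_SCOPES.get(method, set())
    return required.issubset(granted_scopes)

def handle_http_v1_models(granted_scopes: set[str]):
    # The fixed behaviour: enforce the read scope before serving metadata.
    if not authorize("models.list", granted_scopes):
        return 403, "missing operator.read scope"
    return 200, ["model-a", "model-b"]
```

With this shape, a caller holding only `operator.approvals` is rejected over HTTP exactly as it is over WebSocket.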

GitHub Advisory Database
04

GHSA-hr5v-j9h9-xjhg: OpenClaw has Sandbox Media Root Bypass via Unnormalized `mediaUrl` / `fileUrl` Parameter Keys (CWE-22)

security
Mar 30, 2026

OpenClaw has a path traversal vulnerability (CWE-22, a type of attack where an attacker uses special characters like ../ to access files outside their intended directory) that allows sandboxed agents to read files from other agents' workspaces. The vulnerability exists because the sandbox validation function only checks certain parameter keys (media, path, filePath) but misses mediaUrl and fileUrl, which are actually used by messaging extensions. Additionally, a separate function fails to pass the sandbox root restrictions to plugins, allowing them to read the entire ~/.openclaw/ directory instead of just an individual agent's folder.

Fix: Fixed in OpenClaw 2026.3.24, the current shipping release.

GitHub Advisory Database
05

CVE-2026-29872: A cross-session information disclosure vulnerability exists in the awesome-llm-apps project in commit e46690f99c3f08be80

security
Mar 30, 2026

A cross-session information disclosure vulnerability exists in the awesome-llm-apps project where user API tokens are stored in process-wide environment variables without proper isolation. Because Streamlit (a web framework for Python applications) runs multiple users in a single process, credentials entered by one user can be accessed by other users, allowing attackers to steal sensitive tokens like GitHub Personal Access Tokens or LLM API keys.
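The underlying mistake is storing per-user secrets in process-wide state. `os.environ` is shared by every session served from the same Python process; Streamlit's `st.session_state` is per-session. A sketch contrasting the two (the session dict models what `st.session_state` provides):

```python
import os

# Anti-pattern: process-wide storage. Every session served by the same
# process can read this value.
def store_token_globally(token: str) -> None:
    os.environ["GITHUB_TOKEN"] = token

# Safer pattern: per-session storage. In Streamlit this would be
# st.session_state; a dict keyed by session id models the idea.
session_state: dict[str, dict[str, str]] = {}

def store_token_per_session(session_id: str, token: str) -> None:
    session_state.setdefault(session_id, {})["GITHUB_TOKEN"] = token
```

With per-session storage, one user's GitHub PAT or LLM API key is never visible to code running on behalf of another user.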

NVD/CVE Database
06

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

security · privacy
Mar 30, 2026

OpenAI patched a vulnerability in ChatGPT that allowed attackers to secretly extract sensitive user data, such as conversation messages and uploaded files, by exploiting a hidden DNS-based communication path (a covert channel using the Domain Name System to send data) in the Linux runtime that the AI uses for code execution. The flaw bypassed ChatGPT's built-in safety guardrails (protections designed to prevent unauthorized data sharing) and could be triggered through malicious prompts or embedded in custom GPTs without triggering any user warnings.
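DNS-based channels work because outbound DNS lookups are often permitted even when other network egress is blocked: the attacker encodes data into hostnames under a domain they control, and their nameserver logs the lookups. A minimal sketch of the encoding step (illustrative of the technique class only, not OpenAI's specific flaw):

```python
import base64

def dns_exfil_hostnames(secret: bytes, attacker_domain: str, chunk: int = 40) -> list[str]:
    """Split data into DNS-safe labels; each lookup of one of these names
    delivers a chunk of the secret to whoever runs the domain's nameserver."""
    # Base32 is case-insensitive and alphanumeric, so labels survive DNS.
    enc = base64.b32encode(secret).decode().rstrip("=")
    return [f"{enc[i:i + chunk]}.{attacker_domain}"
            for i in range(0, len(enc), chunk)]
```

Defenses therefore include restricting which resolvers sandboxed code may reach and monitoring for high-entropy subdomain lookups.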

Fix: OpenAI addressed the issue on February 20, 2026, following responsible disclosure (the practice of privately reporting security flaws to a vendor before public release).

The Hacker News
07

CVE-2026-2287: CrewAI does not properly check that Docker is still running during runtime, and will fall back to a sandbox setting that can be exploited for remote code execution

security
Mar 30, 2026

CrewAI has a vulnerability where it fails to properly verify that Docker (a containerization tool that isolates applications) is still running during execution. When Docker stops, the software falls back to a less secure sandbox setting that can be exploited for RCE (remote code execution, where an attacker runs commands on a system they don't control).
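The safe pattern is to probe Docker at execution time and fail closed rather than silently downgrade to a weaker sandbox. A sketch (hypothetical names, not CrewAI's actual fix; the `docker_ok` parameter exists only to make the fallback behaviour testable):

```python
import shutil
import subprocess

def docker_available() -> bool:
    """Check at call time, not just at startup, that the Docker daemon responds."""
    if shutil.which("docker") is None:
        return False
    try:
        subprocess.run(["docker", "info"], capture_output=True, timeout=5, check=True)
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False

def run_untrusted(code: str, docker_ok=None) -> str:
    """Fail closed: if Docker is unreachable, raise instead of falling back
    to a weaker in-process sandbox."""
    if docker_ok is None:
        docker_ok = docker_available()
    if not docker_ok:
        raise RuntimeError("Docker sandbox unavailable; refusing insecure fallback")
    return "queued-for-container"  # placeholder for real container execution
```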

NVD/CVE Database
08

CVE-2026-2286: CrewAI contains a server-side request forgery vulnerability that enables content acquisition from internal and cloud services

security
Mar 30, 2026

CrewAI contains a server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making unwanted requests to other systems) that allows attackers to access content from internal and cloud services. The vulnerability exists because the RAG search tools (a feature that retrieves external documents to help answer questions) do not properly validate URLs that users provide at runtime.
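A common mitigation for this SSRF class is to resolve the hostname and reject private, loopback, link-local, and reserved ranges (including the cloud metadata address 169.254.169.254) before fetching. A sketch of such a pre-fetch check (not CrewAI's actual fix):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local,
    or reserved addresses before fetching them."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

A production check would also pin the resolved address for the actual request, since re-resolving at fetch time leaves a DNS-rebinding window.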

NVD/CVE Database
09

CVE-2026-2285: CrewAI contains an arbitrary local file read vulnerability in the JSON loader tool that reads files without path validation

security
Mar 30, 2026

CrewAI has an arbitrary local file read vulnerability: its JSON loader tool reads files without validating the supplied path, so attackers can read any file the server process can access rather than only files in the tool's intended directory.

NVD/CVE Database
10

CVE-2026-2275: The CrewAI CodeInterpreter tool falls back to SandboxPython when it cannot reach Docker, which can enable RCE through arbitrary C function calling

security
Mar 30, 2026

CrewAI's CodeInterpreter tool has a security flaw where it falls back to SandboxPython when Docker (a containerization system for running code safely) is unavailable, which can allow RCE (remote code execution, where an attacker runs commands on a system they don't own) through arbitrary C function calling.

NVD/CVE Database