aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Claude’s code: Anthropic leaks source code for AI software engineering tool

security, privacy
Apr 1, 2026

Anthropic accidentally leaked nearly 2,000 internal files and 500,000 lines of code for its Claude Code AI tool when, through human error, an internal file included in a software update pointed to an archive that was quickly copied to GitHub. The leaked source code spread widely on social media and became GitHub's fastest-downloaded repository ever before Anthropic issued copyright takedown requests to limit its distribution.

Fix: Anthropic issued copyright takedown requests to try to contain the code's spread.

The Guardian Technology
02

CVE-2026-34447: ONNX symlink traversal when loading external data (fixed in 1.21.0)

security
Apr 1, 2026

ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) versions before 1.21.0 have a symlink traversal vulnerability (a flaw where attackers can follow symbolic links to access files outside the intended model directory), allowing unauthorized reading of files outside the model directory. This vulnerability affects how ONNX loads external data when processing models.

Fix: This issue has been patched in version 1.21.0. Users should upgrade to ONNX version 1.21.0 or later.
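The underlying defense for this class of flaw is a containment check on fully resolved paths. A minimal Python sketch, with hypothetical function and parameter names of our own (the real fix ships in onnx 1.21.0; this is not its code):

```python
from pathlib import Path

def resolve_external_data(model_dir: str, relative_path: str) -> Path:
    """Resolve an external-data path and refuse anything that escapes
    the model directory (e.g. via `..` segments or symlinks).

    Illustrative sketch only: names are ours, not part of the ONNX API.
    """
    base = Path(model_dir).resolve()
    candidate = (base / relative_path).resolve()  # follows symlinks
    if not candidate.is_relative_to(base):
        raise ValueError(f"external data escapes model dir: {relative_path}")
    return candidate
```

The key detail is resolving *before* comparing: `Path.resolve()` follows symlinks and collapses `..`, so a link planted inside the model directory still gets caught. (`Path.is_relative_to` requires Python 3.9+.)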

NVD/CVE Database
03

CVE-2026-34446: ONNX hardlink bypass of symlink path traversal checks (fixed in 1.21.0)

security
Apr 1, 2026

ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) has a security flaw in versions before 1.21.0 where its file-loading function checks for symlinks (shortcuts to files) but misses hardlinks (alternate names pointing to the same file), allowing attackers to bypass path traversal protections (restrictions that prevent accessing files outside an intended folder).

Fix: Update ONNX to version 1.21.0 or later, where this issue has been patched.
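The distinction matters because a symlink check (an `lstat`-based test like `os.path.islink`) only detects symbolic links; a hardlink is an ordinary directory entry sharing the target's inode, so it passes that test. A minimal, illustrative detection sketch (our own naming, not the ONNX patch) inspects the link count instead:

```python
import os

def check_not_hardlinked(path: str) -> None:
    """Reject files reachable under more than one name.

    os.lstat reports st_nlink, the number of directory entries
    (hard links) pointing at the file's inode; a count above 1
    means some other path names the same bytes. Note this also
    rejects files that are legitimately hard-linked.
    Illustrative sketch only.
    """
    if os.lstat(path).st_nlink > 1:
        raise ValueError(f"refusing hard-linked file: {path}")
```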

NVD/CVE Database
04

CVE-2026-34445: ONNX improper validation of model metadata can overwrite internal object properties (fixed in 1.21.0)

security
Apr 1, 2026

ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) had a vulnerability in versions before 1.21.0 where it didn't properly validate data loaded from model files, allowing an attacker to craft a malicious model that could overwrite internal object properties. An attacker could exploit this by embedding specially crafted metadata (like file paths) into an ONNX model file that would be processed without proper checks.

Fix: Update ONNX to version 1.21.0 or later, where this issue has been patched.
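A generic mitigation for this class of bug is to allowlist which metadata keys a model file may set, rather than copying attacker-controlled keys onto internal objects. A hedged Python sketch; the key set and `apply_metadata` helper are hypothetical illustrations, not ONNX's actual fix:

```python
# Keys a model file is allowed to set; anything else is rejected.
# This particular set is hypothetical, chosen for illustration.
ALLOWED_METADATA_KEYS = {"description", "author", "license"}

def apply_metadata(obj, metadata: dict) -> None:
    """Copy only allowlisted keys onto obj.

    Blindly running setattr(obj, key, value) for every key lets a
    crafted model overwrite internal properties (e.g. a file path
    the loader uses later). Illustrative sketch only.
    """
    for key, value in metadata.items():
        if key not in ALLOWED_METADATA_KEYS:
            raise ValueError(f"unexpected metadata key: {key}")
        setattr(obj, key, value)
```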

NVD/CVE Database
05

CVE-2026-27489: ONNX path traversal via symlink allows arbitrary file reads (fixed in 1.21.0)

security
Apr 1, 2026

ONNX (Open Neural Network Exchange, a standard format for sharing machine learning models) versions before 1.21.0 have a path traversal vulnerability via symlink (a symbolic link, a shortcut that can point to files outside the intended folder), allowing attackers to read arbitrary files outside the model or user-provided directory. The vulnerability has a CVSS score (0-10 severity rating) of 8.7, indicating high severity.

Fix: Update to ONNX version 1.21.0 or later, where this issue has been patched.

NVD/CVE Database
06

Vim and GNU Emacs: Claude Code helpfully found zero-day exploits for both

security, research
Apr 1, 2026

Researcher Hung Nguyen used Anthropic's Claude Code (an AI tool for analyzing code) to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs, two widely-used text editors. Claude Code found vulnerabilities that would allow attackers to execute arbitrary code (run commands they don't control) simply by tricking users into opening malicious files, and even generated proof-of-concept exploits (working examples of attacks) within minutes.

Fix: For Vim, the vulnerability (CVE-2026-34714, CVSS score 9.2) was fixed by the maintainers in version 9.2.0272. For GNU Emacs, the source states that the maintainers declined to address the issue, believing it to be a problem with Git instead; Nguyen suggests manual mitigations, but the source does not describe what those mitigations are.

CSO Online
07

Webinar Today: Agentic AI vs. Identity’s Last Mile Problem

security, industry
Apr 1, 2026

This webinar discusses agentic AI (AI systems that can plan and take actions independently to complete tasks), its current capabilities and limitations, and how disconnected applications create identity security vulnerabilities that have led to real breaches. The event explores the 'last mile problem' in identity security, which refers to the final challenge of verifying user identity across systems that don't communicate well with each other.

SecurityWeek
08

Block the Prompt, Not the Work: The End of "Doctor No"

security, policy
Apr 1, 2026

Traditional enterprise security approaches that simply block access to AI tools and websites create a "Workaround Economy": employees bypass controls through unmanaged alternatives such as personal email or browser extensions, leaving the organization with zero visibility and increased risk. The article argues that blocking is ineffective because security tools like firewalls and endpoint agents (software that monitors device activity) either break the user experience or remain blind to threats such as browser extensions harvesting data. It cites a law firm that blocked DeepSeek, only to discover that 70% of users had installed invisible AI wrapper extensions routing traffic overseas.

The Hacker News
09

AI can push your Stream Deck buttons for you

industry
Apr 1, 2026

Elgato's Stream Deck 7.4 software update now supports MCP (Model Context Protocol, a standard that lets AI assistants interact with software tools), allowing AI chatbots like Claude and ChatGPT to automatically activate Stream Deck buttons instead of requiring manual button presses. Users can now request actions through voice or text, and the AI will trigger the corresponding Stream Deck functions.

The Verge (AI)
10

Prompting Frameworks for Large Language Models: A Survey

research
Apr 1, 2026

This is an academic survey paper that reviews different prompting frameworks, which are structured approaches to asking large language models (AI systems trained on huge amounts of text) questions or giving them instructions to complete tasks. The paper, published in a major computer science journal, catalogues and analyzes various methods researchers have developed to improve how effectively people interact with and get useful results from LLMs.

ACM Digital Library (TOPS, DTRAP, CSUR)