aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Human-Inspired Scene Understanding: A Grounded Cognition Method for Unbiased Scene Graph Generation

research
Nov 21, 2025

Scene Graph Generation (SGG, a method that identifies objects and their relationships in images) is limited by long-tailed bias, where the AI model performs well on common relationships but poorly on rare ones. This paper proposes a Grounded Cognition Method (GCM) that mimics human thinking by using techniques like Out Domain Knowledge Injection to broaden visual understanding, a Semantic Group Aware Synthesizer to organize relationship categories, modality erasure (removing one type of input at a time) to improve robustness, and a Shapley Enhanced Multimodal Counterfactual module to handle diverse contexts.

IEEE Xplore (Security & AI Journals)
02

Rethinking Rotation-Invariant Recognition of Fine-Grained Shapes From the Perspective of Contour Points

research
Nov 21, 2025

This research addresses the problem of recognizing shapes that have been rotated at different angles in computer vision (the field of teaching computers to understand images). The authors propose a new method that focuses on analyzing the outline or contour points of shapes rather than individual pixels, and they use a special neural network module to identify geometric patterns in these contours while ignoring rotation. Their approach shows better results than previous methods, especially for complex shapes, and it works even when the contour data is slightly noisy or imperfect.

IEEE Xplore (Security & AI Journals)
03

CVE-2025-62426: vLLM denial of service via unvalidated chat_template_kwargs API parameter

security
Nov 21, 2025

vLLM is a tool that runs large language models and serves them to users. In versions 0.5.5 through 0.11.0, two API endpoints accept a parameter called chat_template_kwargs that isn't properly checked before being used, allowing attackers to send specially crafted requests that freeze the server and prevent other users' requests from being processed.

Fix: Update to vLLM version 0.11.1 or later, where this issue has been patched.
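The general defense against this bug class is to validate untrusted per-request kwargs against an allowlist before they reach the template renderer. A minimal sketch of that idea, assuming a hypothetical allowlist and key names (this is illustrative, not vLLM's actual patch):

```python
# Minimal sketch: reject unknown keys and wrong types in a
# client-supplied kwargs object before passing it to a template
# renderer. The allowed keys below are hypothetical examples.

ALLOWED_KWARGS = {"add_generation_prompt": bool, "enable_thinking": bool}

def validate_chat_template_kwargs(kwargs):
    """Raise ValueError instead of forwarding unvetted kwargs."""
    if not isinstance(kwargs, dict):
        raise ValueError("chat_template_kwargs must be an object")
    for key, value in kwargs.items():
        expected = ALLOWED_KWARGS.get(key)
        if expected is None:
            raise ValueError(f"unknown template kwarg: {key!r}")
        if not isinstance(value, expected):
            raise ValueError(f"{key!r} must be {expected.__name__}")
    return kwargs
```

Failing closed on unknown keys (rather than silently dropping them) makes crafted payloads visible in logs instead of letting them reach the rendering path.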

NVD/CVE Database
04

CVE-2025-62372: vLLM engine crash via malformed multimodal embedding inputs

security
Nov 21, 2025

vLLM (an inference and serving engine for large language models) versions 0.5.5 through 0.11.0 have a vulnerability where users can crash the engine by sending multimodal embedding inputs (data that combines multiple types of information, like images and text) with incorrect shape parameters, even if the model doesn't support such inputs. This bug has a CVSS score of 8.3 (a 0-10 scale measuring vulnerability severity), indicating it's a high-severity issue.

Fix: This issue has been patched in version 0.11.1. Users should upgrade to vLLM version 0.11.1 or later.
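The underlying pattern is a shape-validation gap: a client declares a tensor shape that the server never reconciles with the payload or the model's capabilities. A hedged sketch of the kind of check that prevents it (function and parameter names are hypothetical, not vLLM's API):

```python
# Illustrative defensive check: confirm a client-declared embedding
# shape matches both the payload actually sent and the dimension the
# model supports, before any reshape is attempted.

def check_embedding_input(flat_values, declared_shape, expected_dim):
    if len(declared_shape) != 2:
        raise ValueError("embedding input must be 2-D (tokens x dim)")
    n_tokens, dim = declared_shape
    if dim != expected_dim:
        raise ValueError(f"model expects dim {expected_dim}, got {dim}")
    if n_tokens <= 0 or n_tokens * dim != len(flat_values):
        raise ValueError("declared shape does not match payload size")
    return n_tokens, dim
```

A reshape on an inconsistent shape typically raises deep inside the engine; validating at the API boundary turns a crash into a clean 4xx error for one request.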

NVD/CVE Database
05

CVE-2025-62164: vLLM memory corruption when processing user-supplied prompt embeddings

security
Nov 21, 2025

vLLM versions 0.10.2 through 0.11.0 have a vulnerability in how they process user-supplied prompt embeddings (numerical representations of text). An attacker can craft malicious data that bypasses safety checks and causes memory corruption (writing data to the wrong location in computer memory), which can crash the system or potentially allow remote code execution (RCE, where an attacker runs commands on the server).

Fix: Update to vLLM version 0.11.1 or later, where this issue has been patched.
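Memory-corruption bugs in deserializers usually come down to trusting a length field embedded in attacker-controlled bytes. A language-agnostic sketch of the defense, using a hypothetical wire format (a 4-byte count followed by float32 values, which is not vLLM's actual format):

```python
# Sketch of the general defense for this bug class: bounds-check a
# declared element count against both a hard cap and the real payload
# size before reading a single value. The wire format is hypothetical.
import struct

def parse_embeddings(payload: bytes, max_floats: int = 65536):
    if len(payload) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<I", payload, 0)
    expected = 4 + count * 4
    if count > max_floats or len(payload) != expected:
        raise ValueError("declared count does not match payload")
    return list(struct.unpack_from(f"<{count}f", payload, 4))
```

In memory-unsafe code the same missing check becomes an out-of-bounds write rather than an exception, which is what elevates this class of bug from denial of service to potential remote code execution.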

NVD/CVE Database
06

CVE-2025-64755: Claude Code sed command parsing error allowed arbitrary file writes

security
Nov 20, 2025

Claude Code is an agentic coding tool (a program that can write code automatically) that had a vulnerability before version 2.0.31 where a mistake in how it parsed sed commands (a tool for editing text) allowed attackers to bypass safety checks and write files anywhere on a computer system. This vulnerability has been fixed.

Fix: Update to Claude Code version 2.0.31 or later, where this issue has been patched.

NVD/CVE Database
07

CVE-2025-64660: Improper access control in GitHub Copilot and Visual Studio Code allows an authorized attacker to execute code over a network

security
Nov 20, 2025

CVE-2025-64660 is a vulnerability in GitHub Copilot and Visual Studio Code involving improper access control (a flaw in how the software checks who is allowed to do what), which allows an authorized attacker to execute code over a network. It is scored under CVSS version 4.0 (a 0-10 scale measuring how serious a vulnerability is). In practice, someone with legitimate access to these tools could potentially run malicious code remotely.

NVD/CVE Database
08

CVE-2025-65099: Claude Code could execute Yarn plugin code before prompting the user

security
Nov 19, 2025

Claude Code, an agentic coding tool (software that can write and execute code), had a vulnerability before version 1.0.39 where it could run code from yarn plugins (add-ons for the Yarn package manager) before asking the user for permission, but only on machines with Yarn 3.0 or newer. This attack required tricking a user into opening Claude Code in an untrusted directory (a folder with malicious code).

Fix: Update Claude Code to version 1.0.39 or later, where this issue has been patched.

NVD/CVE Database
09

Level up your Solidity LLM tooling with Slither-MCP

industry
Nov 15, 2025

Slither-MCP is a new tool that connects LLMs (large language models) with Slither's static analysis engine (a tool that examines code without running it to find bugs), making it easier for AI systems to analyze and audit smart contracts written in Solidity (a programming language for blockchain). Instead of using basic search tools, LLMs can now directly ask Slither to find function implementations and security issues more accurately and efficiently.

Trail of Bits Blog
10

CVE-2025-63396: PyTorch torch.profiler.profile denial of service when profiler.stop() is omitted

security
Nov 12, 2025

PyTorch versions 2.5 and 2.7.1 have a bug where forgetting to call profiler.stop() can cause torch.profiler.profile (a Python tool that measures code performance) to crash or hang, resulting in a Denial of Service (DoS, where a system becomes unavailable). The underlying issue involves improper locking (a mechanism that controls how multiple processes access shared resources).
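The robust pattern for paired start/stop resources like this is to drive them through a context manager, so stop() runs even when the profiled code raises. A stand-in sketch of that pattern (the `FakeProfiler` class is a placeholder for illustration; torch.profiler.profile supports the same `with` usage):

```python
# Pattern sketch: guarantee stop() via a context manager, so an
# exception in the profiled region cannot leave the profiler running.
# FakeProfiler is a stand-in; it is not the PyTorch implementation.

class FakeProfiler:
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.stop()  # cleanup runs even if the body raised
        return False  # do not swallow the exception

prof = FakeProfiler()
try:
    with prof:
        raise RuntimeError("profiled code failed")
except RuntimeError:
    pass
assert not prof.running  # stop() still ran
```

Calling start()/stop() manually and forgetting the stop() on an error path is exactly the omission the CVE describes; the `with` form makes that omission impossible.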

NVD/CVE Database