aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741 · Last 24 hours: 32 · Last 7 days: 172
Daily Briefing · Wednesday, April 1, 2026

- Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized software (compromised code) containing malware.

- AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that allow attackers to execute arbitrary code (run their own commands) by tricking users into opening malicious files, with Claude Code generating working proof-of-concept attacks in minutes.
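A hedged sketch of the kind of pre-publish guard that would catch a stray source map like the one described in the first briefing item; the function and directory names are illustrative, not Anthropic's or npm's actual tooling:

```python
# Illustrative pre-publish check (assumption: publishing from a local
# directory): find *.map source-map files before shipping a package,
# since a stray .map file can expose the original source code.
from pathlib import Path

def find_source_maps(package_dir: str) -> list[str]:
    """Return all source-map files under `package_dir`, sorted by path."""
    return sorted(str(p) for p in Path(package_dir).rglob("*.map"))
```

A CI step could fail the build whenever `find_source_maps("dist")` returns a non-empty list.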

Latest Intel

01

CVE-2025-46152: In PyTorch before 2.7.0, bitwise_right_shift produces incorrect output for certain out-of-bounds values of the "other" argument.

security
Sep 25, 2025

CVE-2025-46152 is a bug in PyTorch (a machine learning library) versions before 2.7.0 where the bitwise_right_shift function (which moves binary digits to the right) produces wrong answers when given certain out-of-bounds values. This is classified as an out-of-bounds write vulnerability (CWE-787, where a program writes data outside its intended memory area).
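As a conceptual illustration only (pure Python, not PyTorch's actual fix, which is simply upgrading to 2.7.0 or later), a defensive wrapper shows how out-of-range shift amounts can be made well defined instead of producing incorrect output:

```python
# Illustrative sketch: a fixed-width right shift that treats out-of-bounds
# shift amounts explicitly, rather than relying on undefined or
# implementation-specific behavior. Not PyTorch code.

def safe_right_shift(value: int, shift: int, width: int = 32) -> int:
    """Right-shift `value` as a `width`-bit unsigned integer; out-of-range
    shift amounts yield 0 instead of garbage."""
    value &= (1 << width) - 1          # confine to the fixed-width domain
    if shift < 0 or shift >= width:    # out-of-bounds shift amount
        return 0
    return value >> shift

print(safe_right_shift(8, 2))    # 2
print(safe_right_shift(8, 40))   # 0  (clamped instead of undefined)
```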

Fix: Upgrade PyTorch to version 2.7.0 or later.

NVD/CVE Database

Critical This Week · 5 issues

critical
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/
CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Daily Briefing (continued)

- Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input), prompting Google to begin addressing the disclosed issues.

- Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and AI assistant that can describe what the wearer sees and provide information, but raise significant privacy concerns because they can record video of others without their knowledge or consent.
02

CVE-2025-46150: In PyTorch before 2.7.0, when torch.compile is used, FractionalMaxPool2d has inconsistent results.

security
Sep 25, 2025

CVE-2025-46150 is a bug in PyTorch (a machine learning framework) versions before 2.7.0 where FractionalMaxPool2d (a function that reduces image dimensions) produces inconsistent results when torch.compile (a performance optimization tool) is used. The issue causes the function to give different outputs under the same conditions, which is problematic for machine learning models that need reproducible, reliable results.

Fix: Upgrade to PyTorch version 2.7.0 or later.

NVD/CVE Database
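A minimal, hedged sketch of the kind of reproducibility check this class of bug motivates; `flaky` is a made-up stand-in, and a real check would compare eager versus torch.compile outputs of the same model:

```python
# Hedged sketch: detect whether a function gives identical results across
# repeated runs on the same input. Illustrative only, not PyTorch tooling.
import random

def is_reproducible(fn, inputs, trials: int = 3) -> bool:
    """Return True if `fn(inputs)` yields an identical result every trial."""
    baseline = fn(inputs)
    return all(fn(inputs) == baseline for _ in range(trials))

# A deterministic function passes the check...
print(is_reproducible(sorted, [3, 1, 2]))   # True
# ...while a nondeterministic stand-in fails (with overwhelming probability).
flaky = lambda xs: xs + [random.random()]
print(is_reproducible(flaky, []))           # False
```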
03

CVE-2025-46149: In PyTorch before 2.7.0, when inductor is used, nn.Fold has an assertion error.

security
Sep 25, 2025

CVE-2025-46149 is a bug in PyTorch (a machine learning library) versions before 2.7.0 where the nn.Fold function crashes with an assertion error when inductor (PyTorch's code optimization tool) is used. This is classified as a reachable assertion vulnerability, meaning the code reaches a safety check that fails unexpectedly.

Fix: Upgrade to PyTorch version 2.7.0 or later.

NVD/CVE Database
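Since several of these advisories share the same remediation (upgrade past 2.7.0), a small version gate can flag vulnerable installs. The toy parser below is an illustrative assumption; real code should use `packaging.version` to handle pre-release tags:

```python
# Hedged helper: compare an installed version string against the first
# fixed release (2.7.0 for the PyTorch issues above). The minimal parser
# assumes plain "X.Y.Z" strings.

def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def needs_upgrade(installed: str, fixed: str = "2.7.0") -> bool:
    return parse_version(installed) < parse_version(fixed)

print(needs_upgrade("2.6.0"))   # True
print(needs_upgrade("2.7.0"))   # False
```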
04

CVE-2025-46148: In PyTorch through 2.6.0, when eager is used, nn.PairwiseDistance(p=2) produces incorrect results.

security
Sep 25, 2025

PyTorch versions up to 2.6.0 have a bug where the nn.PairwiseDistance function (a tool that calculates distances between pairs of data points) produces wrong answers when using the p=2 parameter in eager mode (the default execution method). This is a correctness issue, meaning the calculation gives incorrect numerical results rather than causing a security breach.

NVD/CVE Database
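A hedged cross-check for correctness issues like this one: recompute the p=2 (Euclidean) distance independently in plain Python and compare it against the library's output on a few sample pairs. This is a reference formula, not PyTorch code:

```python
# Independent reference implementation of the Euclidean (p=2) distance,
# useful for sanity-checking a library's pairwise-distance output.
import math

def euclidean_distance(a, b) -> float:
    """Distance between two equal-length sequences of coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(euclidean_distance((0.0, 0.0), (3.0, 4.0)))  # 5.0
```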
05

CVE-2025-59828: Claude Code is an agentic coding tool. Prior to Claude Code version 1.0.39, when using Claude Code with Yarn versions 2.0 or newer, Yarn plugins could run automatically during version checks, bypassing the trust dialog.

security
Sep 24, 2025

Claude Code is a tool that uses AI to help write code, and it had a security flaw in versions before 1.0.39 where Yarn plugins (add-ons for a package manager) would run automatically when checking the version, bypassing Claude Code's trust dialog (a safety check asking users to confirm they trust a directory before working in it). This only affected users with Yarn versions 2.0 and newer, not those using the older Yarn Classic.

Fix: Update Claude Code to version 1.0.39 or later. Users with auto-update enabled will have received the fix automatically. Users updating manually should update to the latest version.

NVD/CVE Database
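A hedged illustration of the general defensive pattern (not Anthropic's actual fix): query a tool's version from a neutral working directory so that directory-local configuration, such as Yarn plugins, cannot run as a side effect. The Python interpreter stands in for the tool here:

```python
# Hedged sketch: run a version check with cwd set to a temp directory,
# so project-local config in the current directory is never consulted.
import subprocess
import sys
import tempfile

def tool_version(cmd: list[str]) -> str:
    """Run `cmd --version` from a neutral directory and return its output."""
    result = subprocess.run(
        cmd + ["--version"],
        cwd=tempfile.gettempdir(),   # neutral cwd: no project-local config
        capture_output=True,
        text=True,
        timeout=10,
        check=True,
    )
    return (result.stdout or result.stderr).strip()

print(tool_version([sys.executable]))  # e.g. "Python 3.12.1"
```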
06

Cross-Agent Privilege Escalation: When Agents Free Each Other

security · safety
Sep 24, 2025

Multiple AI coding agents (like GitHub Copilot and Claude Code) can write to each other's configuration files, allowing one compromised agent to modify another agent's settings through an indirect prompt injection (tricking an AI by hiding malicious instructions in its input). This creates a cross-agent privilege escalation, where one agent can 'free' another by giving it additional capabilities to break out of its sandbox (an isolated environment limiting what software can do) and execute arbitrary code.

Embrace The Red
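One narrow, hedged mitigation sketch suggested by this finding: audit agent configuration files for loose permissions that would let another local process tamper with them. This assumes a POSIX system, and the temp file below is only a demo target, not a real agent config:

```python
# Hedged sketch (POSIX assumption): flag files writable by group or other,
# one way a co-resident process could rewrite another agent's settings.
import os
import stat
import tempfile

def writable_by_others(path: str) -> bool:
    """True if the file's group- or other-write permission bit is set."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demo on a throwaway temp file standing in for an agent config:
with tempfile.NamedTemporaryFile(delete=False) as f:
    cfg = f.name
os.chmod(cfg, 0o666)
print(writable_by_others(cfg))  # True on POSIX systems
os.chmod(cfg, 0o600)
print(writable_by_others(cfg))  # False
os.unlink(cfg)
```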
07

AI Safety Newsletter #63: California’s SB-53 Passes the Legislature

policy
Sep 24, 2025

California's legislature passed SB-53, the 'Transparency in Frontier Artificial Intelligence Act,' which would make California the first US state to regulate catastrophic risk (foreseeable harms like weapons creation, cyberattacks, or loss of control that could kill over 50 people or cause over $1 billion in damage). The bill requires developers of frontier AI models (large, cutting-edge AI systems) to publish transparency reports on their systems' capabilities and risk assessments, update safety frameworks yearly, and report critical safety incidents to state emergency services.

Fix: SB-53 itself is the mitigation strategy described in the source. The bill requires frontier AI developers to:

- publish a frontier AI framework detailing capability thresholds and risk mitigations;
- review and update the framework annually, with public disclosure of changes within 30 days;
- publish transparency reports for each new frontier model, including technical specifications and catastrophic risk assessments;
- share catastrophic risk assessments from internal model use with California's Office of Emergency Services every 3 months;
- refrain from misrepresenting catastrophic risks or compliance with their framework.

CAIS AI Safety Newsletter
08

Privacy-Preserving Automated Deep Learning for Secure Inference Service

security · privacy
Sep 24, 2025

This research proposes 2PCAutoDL, a system for automatically designing deep neural networks (DNNs, which are AI models with many layers) while keeping data and model designs private by splitting computations between two separate cloud servers. The system balances security and speed by using specialized protocols (step-by-step procedures) for different types of network layers, achieving significant speedups compared to existing approaches while maintaining similar model accuracy.

IEEE Xplore (Security & AI Journals)
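To make the two-server idea concrete, here is the textbook primitive such systems build on, additive secret sharing, as a hedged sketch; this is not the paper's actual protocol, and the modulus choice is illustrative:

```python
# Hedged illustration: split a secret into two random-looking shares so
# that neither server alone learns it, and reconstruction needs both.
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P          # server A holds r, server B holds x - r

def reconstruct(a: int, b: int) -> int:
    return (a + b) % P

a, b = share(12345)
print(reconstruct(a, b))  # 12345
```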
09

RDSAD: Robust Threat Detection in Evolving Data Streams via Adaptive Latent Dynamics

research · security
Sep 24, 2025

RDSAD is an AI-based security system designed to detect cyberattacks on Cyber-Physical Systems (CPSs, which are machines that combine physical equipment with software to automate industrial processes). The system works without manual labeling and uses two techniques: one to understand how the system normally behaves, and another to adapt when patterns change, helping it catch attacks while avoiding false alarms.

IEEE Xplore (Security & AI Journals)
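A hedged sketch of the general idea (unsupervised anomaly detection over a drifting stream), not RDSAD's latent-dynamics model: exponentially weighted running statistics flag readings far from recent behavior while slowly adapting to change:

```python
# Illustrative streaming detector: no labels needed; a z-score against
# exponentially weighted mean/variance flags outliers, and the running
# statistics adapt as the stream's normal behavior drifts.
class StreamingAnomalyDetector:
    def __init__(self, alpha: float = 0.1, threshold: float = 4.0):
        self.alpha = alpha          # adaptation rate
        self.threshold = threshold  # z-score cutoff for "anomalous"
        self.mean = None
        self.var = 1.0

    def update(self, x: float) -> bool:
        """Feed one reading; return True if it looks anomalous."""
        if self.mean is None:       # first reading seeds the baseline
            self.mean = x
            return False
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
        # adapt to drift by updating the running statistics
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return z > self.threshold

det = StreamingAnomalyDetector()
for v in [10.0, 10.2, 9.9, 10.1]:
    det.update(v)            # normal readings establish a baseline
print(det.update(50.0))      # True: a sudden spike is flagged
```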
10

Supply chain attacks are exploiting our assumptions

security
Sep 24, 2025

Modern software development relies on implicit trust assumptions when installing packages through tools like cargo add or pip install, but attackers are systematically exploiting these assumptions through supply chain attacks (attacks that compromise software before it reaches developers). In 2024 alone, malicious packages were removed from package registries (centralized repositories for code), maintainers' accounts were compromised to publish malware, and critical infrastructure nearly had backdoors (hidden access points) inserted. Traditional defenses like dependency scanning (automated checks for known security flaws) only catch known vulnerabilities, missing attacks like typosquatting (creating packages with names similar to legitimate ones), compromised maintainers, and poisoned build pipelines (the automated systems that compile and package code).

Trail of Bits Blog
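As a hedged example of one gap the post describes, typosquatting slipping past vulnerability scanners, a few lines of fuzzy matching can flag dependency names suspiciously close to popular packages; the `POPULAR` set is a tiny illustrative sample, not a real registry index:

```python
# Illustrative typosquat check: flag names that closely resemble, but do
# not exactly match, well-known package names.
import difflib

POPULAR = {"requests", "numpy", "pandas", "cryptography", "urllib3"}

def possible_typosquat(name: str, cutoff: float = 0.85) -> list[str]:
    """Return popular package names that `name` closely resembles
    without matching exactly."""
    if name in POPULAR:
        return []
    return difflib.get_close_matches(name, POPULAR, n=3, cutoff=cutoff)

print(possible_typosquat("reqeusts"))   # ['requests']
print(possible_typosquat("requests"))   # [] (exact match, not a squat)
```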
critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026