aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3263 items

CVE-2025-46152: In PyTorch before 2.7.0, bitwise_right_shift produces incorrect output for certain out-of-bounds values of the "other" argument.

medium · vulnerability
security
Sep 25, 2025
CVE-2025-46152

CVE-2025-46152 is a bug in PyTorch (a machine learning library) versions before 2.7.0 where the bitwise_right_shift function (which moves binary digits to the right) produces wrong answers when given certain out-of-bounds values. This is classified as an out-of-bounds write vulnerability (CWE-787, where a program writes data outside its intended memory area).

Fix: Upgrade PyTorch to version 2.7.0 or later.

NVD/CVE Database
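The shift-amount edge case is easy to state in plain Python. Below is a reference sketch of elementwise int8 right shift with clamped shift amounts; the function names and the clamping rule are illustrative assumptions, not PyTorch's implementation or the exact semantics of its fix.

```python
def int8_right_shift(x: int, s: int) -> int:
    """Reference arithmetic right shift for an int8 value.

    Assumed semantics for this sketch: shift amounts are clamped to
    0..7, so an out-of-bounds shift behaves like shifting all the way
    through the value (0 for non-negatives, -1 for negatives) instead
    of producing garbage. Negative shift amounts are treated as 0.
    """
    if not -128 <= x <= 127:
        raise ValueError("x must fit in int8")
    s = max(0, min(s, 7))   # clamp out-of-bounds shift amounts
    return x >> s           # Python's >> is an arithmetic shift


def bitwise_right_shift(xs, shifts):
    """Elementwise version, mirroring the tensor op's shape."""
    return [int8_right_shift(x, s) for x, s in zip(xs, shifts)]
```

Under these semantics, `bitwise_right_shift([100, -100], [100, 100])` yields `[0, -1]` rather than an arbitrary value.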

CVE-2025-46150: In PyTorch before 2.7.0, when torch.compile is used, FractionalMaxPool2d has inconsistent results.

medium · vulnerability
security
Sep 25, 2025
CVE-2025-46150

CVE-2025-46150 is a bug in PyTorch (a machine learning framework) versions before 2.7.0 where FractionalMaxPool2d (a function that reduces image dimensions) produces inconsistent results when torch.compile (a performance optimization tool) is used. The issue causes the function to give different outputs under the same conditions, which is problematic for machine learning models that need reproducible, reliable results.
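The underlying failure mode here, the same op giving different answers with and without torch.compile, is the kind of thing a small differential-testing harness catches. A minimal sketch in pure Python; `f_eager` and `f_compiled` are stand-in callables, not a PyTorch API:

```python
import random

def check_consistency(f_eager, f_compiled, gen_input, trials=200, tol=0.0):
    """Compare two implementations of the same op on random inputs.

    Sketch of the differential check that catches bugs like
    CVE-2025-46150: run both variants on the same input and record
    any disagreement beyond `tol`.
    """
    mismatches = []
    for _ in range(trials):
        x = gen_input()
        a, b = f_eager(x), f_compiled(x)
        if abs(a - b) > tol:
            mismatches.append((x, a, b))
    return mismatches

# Usage: a correct pair always agrees; a deliberately buggy
# "compiled" variant is flagged.
eager = lambda xs: max(xs)
buggy = lambda xs: max(xs[:-1])   # drops the last element
gen = lambda: [random.randint(0, 9) for _ in range(4)]
assert check_consistency(eager, eager, gen) == []
assert check_consistency(eager, buggy, gen) != []
```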

CVE-2025-46149: In PyTorch before 2.7.0, when inductor is used, nn.Fold has an assertion error.

medium · vulnerability
security
Sep 25, 2025
CVE-2025-46149

CVE-2025-46149 is a bug in PyTorch (a machine learning library) versions before 2.7.0 where the nn.Fold function crashes with an assertion error when inductor (PyTorch's code optimization tool) is used. This is classified as a reachable assertion vulnerability, meaning the code reaches a safety check that fails unexpectedly.

CVE-2025-46148: In PyTorch through 2.6.0, when eager is used, nn.PairwiseDistance(p=2) produces incorrect results.

medium · vulnerability
security
Sep 25, 2025
CVE-2025-46148

PyTorch versions up to 2.6.0 have a bug where the nn.PairwiseDistance function (a tool that calculates distances between pairs of data points) produces wrong answers when using the p=2 parameter in eager mode (the default execution method). This is a correctness issue, meaning the calculation gives incorrect numerical results rather than causing a security breach.
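The computation nn.PairwiseDistance performs is simple to restate in pure Python. This sketch follows the formula PyTorch documents (a small `eps` added to the elementwise difference before the p-norm); it is an illustration of the expected result, not the library's code:

```python
def pairwise_distance(x1, x2, p=2.0, eps=1e-6):
    """Reference p-norm distance between two equal-length vectors.

    Computes (sum_i |x1_i - x2_i + eps|^p)^(1/p). With p=2 and
    eps=0 this is the ordinary Euclidean distance, which is the
    value the buggy eager-mode path should have produced.
    """
    if len(x1) != len(x2):
        raise ValueError("vectors must have the same length")
    return sum(abs(a - b + eps) ** p for a, b in zip(x1, x2)) ** (1.0 / p)
```

For example, `pairwise_distance([0.0, 0.0], [3.0, 4.0], eps=0.0)` is the familiar 3-4-5 result, 5.0.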

Efficient Instruction Vulnerability Prediction With Heterogeneous SDC Propagation Knowledge Graph

info · research · Peer-Reviewed
research

Hecate: Threshold Anonymous Credentials With Private Verifiers and Issuer-Hiding

info · research · Peer-Reviewed
security

CVE-2025-59828: Claude Code is an agentic coding tool. Prior to Claude Code version 1.0.39, when using Claude Code with Yarn versions 2.0 or newer, Yarn plugins could run automatically during a version check, bypassing the trust dialog.

critical · vulnerability
security
Sep 24, 2025
CVE-2025-59828

Claude Code is a tool that uses AI to help write code, and it had a security flaw in versions before 1.0.39 where Yarn plugins (add-ons for a package manager) would run automatically when checking the version, bypassing Claude Code's trust dialog (a safety check asking users to confirm they trust a directory before working in it). This only affected users with Yarn versions 2.0 and newer, not those using the older Yarn Classic.

Cross-Agent Privilege Escalation: When Agents Free Each Other

high · news
security · safety

CVE-2025-27032: memory corruption while loading a PIL authenticated VM, when the authenticated VM image is loaded without maintaining cache coherency

high · vulnerability
security
Sep 24, 2025
CVE-2025-27032

CVE-2025-27032 is a memory corruption bug in Qualcomm systems that occurs when a PIL authenticated VM (a virtual machine protected with Qualcomm's authentication system) is loaded without maintaining cache coherency (keeping copies of data in different storage locations synchronized). This vulnerability allows improper access to memory regions that should be protected.

AI Safety Newsletter #63: California’s SB-53 Passes the Legislature

info · regulatory
policy
Sep 24, 2025

California's legislature passed SB-53, the 'Transparency in Frontier Artificial Intelligence Act,' which would make California the first US state to regulate catastrophic risk (foreseeable harms like weapons creation, cyberattacks, or loss of control that could kill over 50 people or cause over $1 billion in damage). The bill requires developers of frontier AI models (large, cutting-edge AI systems) to publish transparency reports on their systems' capabilities and risk assessments, update safety frameworks yearly, and report critical safety incidents to state emergency services.

OCEAN: Optional Capability-Based En Route Acknowledgement in Network Layer

info · research · Peer-Reviewed
security

Anti-Spoofing and Mask-Supported Face Authentication Using mmWave Without On-Site Registration

info · research · Peer-Reviewed
security

RDSAD: Robust Threat Detection in Evolving Data Streams via Adaptive Latent Dynamics

info · research · Peer-Reviewed
research

Privacy-Preserving Automated Deep Learning for Secure Inference Service

info · research · Peer-Reviewed
security

Charging Into Your Privacy: Indirect Privacy Leakage Attack Using a Laptop Charger

info · research · Peer-Reviewed
security

Forseti: A Decentralized Permission Transfer Framework for IoT Leasing

info · research · Peer-Reviewed
security

Supply chain attacks are exploiting our assumptions

info · news
security
Sep 24, 2025

Modern software development relies on implicit trust assumptions when installing packages through tools like cargo add or pip install, but attackers are systematically exploiting these assumptions through supply chain attacks (attacks that compromise software before it reaches developers). In 2024 alone, malicious packages were removed from package registries (centralized repositories for code), maintainers' accounts were compromised to publish malware, and critical infrastructure nearly had backdoors (hidden access points) inserted. Traditional defenses like dependency scanning (automated checks for known security flaws) only catch known vulnerabilities, missing attacks like typosquatting (creating packages with names similar to legitimate ones), compromised maintainers, and poisoned build pipelines (the automated systems that compile and package code).
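One of the gaps named here, typosquatting, can be illustrated with a few lines of similarity checking against a list of popular package names. The list and threshold below are toy assumptions for illustration, not taken from any real scanner:

```python
from difflib import SequenceMatcher

# Illustrative sample of well-known PyPI packages, not a real denylist.
POPULAR = {"requests", "numpy", "pandas", "urllib3", "cryptography"}

def typosquat_candidates(name, known=POPULAR, threshold=0.84):
    """Flag package names suspiciously close to popular ones.

    Returns the known packages whose names nearly match `name`.
    An exact match is skipped: that is the real package, not a squat.
    """
    name = name.lower()
    hits = []
    for pkg in known:
        if name == pkg:
            continue
        if SequenceMatcher(None, name, pkg).ratio() >= threshold:
            hits.append(pkg)
    return sorted(hits)
```

With these toy settings, a transposed name like "reqeusts" is flagged as close to "requests", while an unrelated name like "flask" passes. Real tools also weigh download counts, registration dates, and maintainer history.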

CVE-2025-6921: The huggingface/transformers library, versions prior to 4.53.0, is vulnerable to Regular Expression Denial of Service (ReDoS).

high · vulnerability
security
Sep 23, 2025
CVE-2025-6921

The huggingface/transformers library before version 4.53.0 has a vulnerability where malicious regular expressions (patterns used to match text) in certain settings can cause ReDoS (regular expression denial of service, a type of attack that makes a system use 100% CPU and become unresponsive). An attacker who can control these regex patterns in the AdamWeightDecay optimizer (a tool that helps train machine learning models) can make the system hang and stop working.
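The classic ReDoS shape is a quantifier applied to a group that itself ends in a quantifier, e.g. `(a+)+`, which can force exponential backtracking. A crude static heuristic for that shape is sketched below; it is illustrative only, is not how transformers fixed this CVE, and misses many real-world variants:

```python
import re

# Matches a group whose body ends in +/* and which is itself
# quantified, e.g. (a+)+ or (x*)* -- the textbook ReDoS pattern.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*]\)[+*]")

def looks_redos_prone(pattern: str) -> bool:
    """Crude heuristic for catastrophic-backtracking risk.

    True if the pattern contains a nested quantifier of the
    (x+)+ / (x*)* form; False otherwise. A real linter would
    also handle alternation overlap, nested groups, and bounded
    repetition.
    """
    return NESTED_QUANTIFIER.search(pattern) is not None
```

For example, `looks_redos_prone(r"(a+)+$")` is True, while a linear pattern like `r"\d{4}-\d{2}"` is not flagged.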

Toward Resisting Black-Box Attacks: A Robust Coverless Image Steganography Based on Hierarchical CID and Dual SIFT

info · research · Peer-Reviewed
research

OptiVersa-ECDSA: Fast Threshold-ECDSA With Cheater Identification for Blockchains

info · research · Peer-Reviewed
research
Page 84 of 164

Fix (CVE-2025-46150): Upgrade to PyTorch version 2.7.0 or later.

NVD/CVE Database

Fix (CVE-2025-46149): Upgrade to PyTorch version 2.7.0 or later.

NVD/CVE Database
Sep 25, 2025

Silent Data Corruption (SDC, where a computer system produces wrong outputs without alerting anyone) is a growing problem in modern chip designs, but current detection methods are inefficient or inaccurate. Researchers proposed VP-HPKG, a new approach that uses a knowledge graph (a map of how instructions relate to each other) combined with neural network techniques to predict which instructions are vulnerable to SDC and detect error propagation paths more efficiently than existing methods.

IEEE Xplore (Security & AI Journals)
Sep 25, 2025

Hecate is a framework for anonymous credentials (a system allowing users to prove they have certain attributes without revealing their identity) that adds protection for verifiers, the entities checking credentials, while maintaining threshold issuance (requiring multiple parties to approve a credential) and issuer-hiding (hiding which organization issued the credential). The system uses a dual-credential design to let both verifiers and users set policies about who can access information, and testing shows it can verify credentials quickly, in about 37-60 milliseconds.

IEEE Xplore (Security & AI Journals)

Fix: Update Claude Code to version 1.0.39 or later. Users with auto-update enabled will have received the fix automatically. Users updating manually should update to the latest version.

NVD/CVE Database
Sep 24, 2025

Multiple AI coding agents (like GitHub Copilot and Claude Code) can write to each other's configuration files, allowing one compromised agent to modify another agent's settings through an indirect prompt injection (tricking an AI by hiding malicious instructions in its input). This creates a cross-agent privilege escalation, where one agent can 'free' another by giving it additional capabilities to break out of its sandbox (an isolated environment limiting what software can do) and execute arbitrary code.

Embrace The Red
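One concrete hardening step implied by this finding is checking who can write to an agent's configuration files in the first place. A minimal POSIX permission check follows; the config paths in the usage comment are hypothetical examples, not documented locations for any particular agent:

```python
import os
import stat

def writable_by_others(path: str) -> bool:
    """True if group or other users have write permission on `path`.

    An agent's config should normally be writable only by its owner;
    a looser mode lets a compromised peer process plant instructions
    or capability grants in it.
    """
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Usage sketch (paths are hypothetical examples):
# for cfg in ["~/.agent/settings.json", "~/.config/agent/mcp.json"]:
#     p = os.path.expanduser(cfg)
#     if os.path.exists(p) and writable_by_others(p):
#         print(f"WARNING: {p} is writable by other users")
```

Note this only addresses the file-permission angle; the article's core point, that agents running as the same user can still write each other's files, needs OS-level sandboxing or separate accounts.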
NVD/CVE Database

Fix: SB-53 itself is the mitigation strategy described in the source. The bill requires frontier AI developers to: publish a frontier AI framework detailing capability thresholds and risk mitigations; review and update the framework annually with public disclosure of changes within 30 days; publish transparency reports for each new frontier model including technical specifications and catastrophic risk assessments; share catastrophic risk assessments from internal model use with California's Office of Emergency Services every 3 months; and refrain from misrepresenting catastrophic risks or compliance with their framework.

CAIS AI Safety Newsletter
Sep 24, 2025

OCEAN is a security system designed for Industrial IoT (the use of connected devices in factories and industrial settings) that aims to prevent packet loss (data getting dropped during transmission) while keeping data transmission fast and secure. It uses specialized hardware (an ASIC and FPGA, which are types of programmable computer chips) combined with a network protocol (set of rules for how data moves between devices) that verifies packets at each hop and caches (temporarily stores) them until receiving confirmation they arrived safely.

IEEE Xplore (Security & AI Journals)
Sep 24, 2025

This research presents mmFace, a face authentication system that uses millimeter wave radar (mmWave, radio signals that can penetrate materials and detect fine details) instead of cameras to verify a person's identity while resisting spoofing attacks (fake faces or replayed recordings). The system works even when users wear masks because mmWave signals can pass through them, and it uses techniques like liveness detection (checking that a face is real and alive) and amplitude modulation-based methods to prevent attackers from fooling it with fake faces or recorded videos.

IEEE Xplore (Security & AI Journals)
Sep 24, 2025

RDSAD is an AI-based security system designed to detect cyberattacks on Cyber-Physical Systems (CPSs, which are machines that combine physical equipment with software to automate industrial processes). The system works without manual labeling and uses two techniques: one to understand how the system normally behaves, and another to adapt when patterns change, helping it catch attacks while avoiding false alarms.

IEEE Xplore (Security & AI Journals)
Sep 24, 2025

This research proposes 2PCAutoDL, a system for automatically designing deep neural networks (DNNs, which are AI models with many layers) while keeping data and model designs private by splitting computations between two separate cloud servers. The system balances security and speed by using specialized protocols (step-by-step procedures) for different types of network layers, achieving significant speedups compared to existing approaches while maintaining similar model accuracy.

IEEE Xplore (Security & AI Journals)
Sep 24, 2025

Researchers discovered a side-channel attack (a method of extracting secret information by analyzing physical properties like power usage rather than breaking encryption directly) called PrivateCharger that can infer what a user is doing on their laptop by analyzing magnetic field signals from the laptop charger from a distance. The attack works with commercially available equipment, requires no physical access to the laptop, and achieved 84.6% accuracy at certain battery levels, revealing that everyday peripherals can leak private information in ways previously not considered.

IEEE Xplore (Security & AI Journals)
Sep 24, 2025

IoT devices used in rental situations like Airbnbs need secure ways to transfer permission (access rights) from owners to renters, but current systems don't properly prevent problems like a malicious owner keeping camera access after handing it over. Forseti is a new authorization framework that uses zero-knowledge proof (a cryptographic method proving something is true without revealing the details) and a decentralized ledger (a shared, distributed record not controlled by any single party) to protect both owners' and renters' control over devices during permission transfers.

Fix: The source presents Forseti as a proposed solution framework that 'leverages zero-knowledge proof and a decentralized ledger to ensure that the rights of both hosts and tenants are not violated.' However, the source does not describe a specific implementation step, patch, update, or deployment procedure that users can apply.

IEEE Xplore (Security & AI Journals)
Trail of Bits Blog

Fix: Update to huggingface/transformers version 4.53.0 or later.

NVD/CVE Database
Sep 23, 2025

This research paper presents a new method for coverless image steganography (CIS, a technique to hide secret information inside images without visibly altering them), designed to resist black-box attacks (attacks where an attacker can't see how the system works, only its outputs). The method uses SIFT (Scale-Invariant Feature Transform, an algorithm that identifies distinctive points in images) to create a dataset and mapping structure that hides data more securely and with greater capacity than previous CIS methods.

IEEE Xplore (Security & AI Journals)
Sep 23, 2025

OptiVersa-ECDSA is a new cryptographic protocol that improves threshold-ECDSA (a method where multiple parties must cooperate to sign blockchain transactions securely). The protocol uses novel techniques called verifiable secret-product sharing (VSPS, a way to distribute and verify secret values) to achieve 35-65% faster performance and 99% improvement in cheater identification compared to previous approaches, making it practical for real-time blockchain use.

IEEE Xplore (Security & AI Journals)
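Threshold signing of any flavor rests on secret sharing: no single party ever holds the whole key. The sketch below shows plain t-of-n Shamir sharing over a prime field to illustrate the threshold idea; it deliberately omits everything that makes OptiVersa-ECDSA novel (VSPS, cheater identification, and the ECDSA signing itself), and the field used is a convenience choice rather than the secp256k1 group order a real protocol would use.

```python
import random

P = 2**127 - 1  # Mersenne prime, chosen for convenience in this sketch

def share(secret, t, n, rng=random):
    """Split `secret` into n Shamir shares; any t recover it.

    Samples a random degree t-1 polynomial f with f(0) = secret and
    hands party i the point (i, f(i)).
    """
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares reconstruct the secret exactly, while fewer than t reveal nothing; threshold-ECDSA protocols build on this so the parties can sign without ever reassembling the key in one place.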