aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3309 items

CVE-2024-12911: A vulnerability in the `default_jsonalyzer` function of the `JSONalyzeQueryEngine` in the run-llama/llama_index reposito

high · vulnerability
security
Mar 20, 2025
CVE-2024-12911

CVE-2024-12911 is a vulnerability in the `default_jsonalyzer` function of `JSONalyzeQueryEngine` in the llama_index library that allows attackers to perform SQL injection (inserting malicious SQL commands) through prompt injection (embedding hidden instructions in the AI's input). This can lead to arbitrary file creation and denial-of-service attacks (making a system unavailable by overwhelming it).

Fix: The vulnerability is fixed in version 0.5.1 of llama_index. Users should upgrade to this version or later.

NVD/CVE Database
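The failure mode behind this class of bug can be sketched in a few lines of stdlib Python (illustrative only, not llama_index code): attacker-shaped text, such as LLM output steered by prompt injection, breaks out of a hand-built SQL string, while a parameterized query treats the same text as a plain value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.executemany("INSERT INTO items VALUES (?)", [("widget",), ("gadget",)])

# A value an attacker smuggled into the model's output via prompt injection.
malicious = "widget' OR '1'='1"

# Unsafe: the payload escapes the quoting and matches every row.
unsafe_sql = f"SELECT * FROM items WHERE name = '{malicious}'"
print(len(conn.execute(unsafe_sql).fetchall()))  # 2 rows: the filter is bypassed

# Safer: a parameterized query cannot be escaped by the payload.
rows = conn.execute("SELECT * FROM items WHERE name = ?", (malicious,)).fetchall()
print(len(rows))  # 0 rows: nothing is literally named "widget' OR '1'='1"
```

The same principle applies when the SQL text itself is generated by an LLM: any untrusted value must reach the database as a bound parameter, never by string interpolation.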

CVE-2024-12029: A remote code execution vulnerability exists in invoke-ai/invokeai versions 5.3.1 through 5.4.2 via the /api/v2/models/i

high · vulnerability
security
Mar 20, 2025
CVE-2024-12029 · EPSS: 49.1%

CVE-2024-10950: In binary-husky/gpt_academic version <= 3.83, the plugin `CodeInterpreter` is vulnerable to code injection caused by pro

high · vulnerability
security
Mar 20, 2025
CVE-2024-10950

In gpt_academic version 3.83 and earlier, the CodeInterpreter plugin has a vulnerability where prompt injection (tricking an AI by hiding instructions in its input) allows attackers to inject malicious code. Because the application executes LLM-generated code without a sandbox (a restricted environment that isolates code from the main system), attackers can achieve RCE (remote code execution, where an attacker can run commands on a system they don't own) and potentially take over the backend server.
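A minimal containment sketch (hypothetical, not gpt_academic's code) runs generated code in a separate interpreter with a timeout instead of calling exec() in the server process. A subprocess is not a sandbox, since the child still shares the host's filesystem and network, but it illustrates the direction a real fix has to take.

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: float = 5.0) -> str:
    """Run LLM-generated code in a separate interpreter with a timeout.

    Containment only, NOT a sandbox: real isolation needs containers,
    seccomp, or a dedicated sandbox runtime.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env/user site
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return result.stdout
    finally:
        os.unlink(path)

output = run_generated_code("print(2 + 2)")
print(output)  # "4"
```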

CVE-2025-27781: Applio is a voice conversion tool. Versions 3.2.8-bugfix and prior are vulnerable to unsafe deserialization in inference

critical · vulnerability
security
Mar 19, 2025
CVE-2025-27781

Applio, a voice conversion tool, has a vulnerability in versions 3.2.8-bugfix and earlier where it unsafely deserializes (converts untrusted data back into code objects) user-supplied model file paths using torch.load, which can allow attackers to run arbitrary code on the system. The vulnerability exists in the inference.py and tts.py files, where user input is passed directly to functions that load models without proper validation.

CVE-2025-27780: Applio is a voice conversion tool. Versions 3.2.8-bugfix and prior are vulnerable to unsafe deserialization in model_inf

critical · vulnerability
security
Mar 19, 2025
CVE-2025-27780

Applio, a voice conversion tool, has a vulnerability in versions 3.2.8-bugfix and earlier where it unsafely deserializes (reconstructs objects from stored data without validation) user-supplied model files using `torch.load`, which could allow attackers to run arbitrary code on the affected system.

CVE-2025-27779: Applio is a voice conversion tool. Versions 3.2.8-bugfix and prior are vulnerable to unsafe deserialization in `model_bl

critical · vulnerability
security
Mar 19, 2025
CVE-2025-27779

Applio, a voice conversion tool, has a vulnerability in versions 3.2.8-bugfix and earlier where it unsafely deserializes (converts untrusted data back into objects) user-supplied model files using `torch.load`, potentially allowing attackers to run arbitrary code on affected systems.
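All three Applio advisories share the same root cause, which the standard library can demonstrate directly (stdlib pickle here, since `torch.load` builds on the same pickle machinery; this is a generic illustration, not Applio code): deserializing untrusted data can invoke an arbitrary callable.

```python
import pickle

class Malicious:
    def __reduce__(self):
        # __reduce__ tells pickle which callable to invoke during load.
        # Here it is the harmless list("abc"); a real attacker would
        # substitute something like os.system.
        return (list, ("abc",))

payload = pickle.dumps(Malicious())
obj = pickle.loads(payload)   # the callable runs just by loading the data
print(obj)                    # ['a', 'b', 'c']
```

For PyTorch checkpoints the commonly recommended mitigation is `torch.load(path, weights_only=True)`, which restricts unpickling to tensor data, combined with loading model files only from trusted sources.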

CVE-2025-29783: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Moo

critical · vulnerability
security
Mar 19, 2025
CVE-2025-29783

CVE-2025-29783 is a remote code execution vulnerability in vLLM (a software engine for running large language models efficiently) that occurs when it is configured with Mooncake, a distributed system component. Attackers can exploit unsafe deserialization (the process of converting stored data back into usable objects) exposed over ZMQ/TCP (network communication protocols) to run arbitrary code on any connected systems in a distributed setup.

CVE-2025-29770: vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the

medium · vulnerability
security
Mar 19, 2025
CVE-2025-29770

vLLM, a system for running large language models efficiently, uses the outlines library to support structured output (guidance on what format the AI's answer should follow). The outlines library stores compiled grammar rules in a cache on the hard drive, which is turned on by default. A malicious user can send many requests with different output formats, filling up this cache and causing the system to run out of disk space, making it unavailable to others (a denial of service attack). This problem affects only the V0 engine version of vLLM.
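One generic mitigation for an unbounded on-disk cache is a size budget with oldest-first eviction. The sketch below is illustrative only and is not the actual vLLM fix; the name `enforce_cache_budget` is made up.

```python
import pathlib
import tempfile

def enforce_cache_budget(cache_dir: str, max_bytes: int) -> None:
    """Evict oldest cache files until the directory fits the budget."""
    entries = sorted(
        (p for p in pathlib.Path(cache_dir).iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,  # oldest first
    )
    total = sum(p.stat().st_size for p in entries)
    for p in entries:
        if total <= max_bytes:
            break
        total -= p.stat().st_size
        p.unlink()  # evict the oldest cached entry

# Demo: three 10-byte cache entries against a 15-byte budget.
cache = tempfile.mkdtemp()
for i in range(3):
    (pathlib.Path(cache) / f"grammar{i}.bin").write_bytes(b"x" * 10)
enforce_cache_budget(cache, max_bytes=15)
remaining = list(pathlib.Path(cache).iterdir())
print(len(remaining))  # 1 entry left
```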

CVE-2025-30234: SmartOS, as used in Triton Data Center and other products, has static host SSH keys in the 60f76fd2-143f-4f57-819b-1ae32

high · vulnerability
security
Mar 19, 2025
CVE-2025-30234

SmartOS, a hypervisor (virtualization software that manages virtual machines) used in Triton Data Center and other products, contains static host SSH keys (unchanging cryptographic credentials for remote access) in a specific Debian 12 LX zone image from July 2024. This means multiple systems could potentially share the same SSH keys, allowing unauthorized remote access if someone obtains these keys.

CVE-2025-29780: Post-Quantum Secure Feldman's Verifiable Secret Sharing provides a Python implementation of Feldman's Verifiable Secret

info · vulnerability
security
Mar 14, 2025
CVE-2025-29780

CVE-2025-29780 is a timing side-channel vulnerability (a security flaw where an attacker measures how long code takes to run to extract secrets) in the feldman_vss Python library versions 0.8.0b2 and earlier. The vulnerability exists in matrix operation functions that don't execute in constant time, potentially allowing an attacker to recover secret information through careful timing measurements of repeated function calls.
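The general shape of a timing side channel, and the standard-library countermeasure, can be shown with byte-string comparison (an illustration of the vulnerability class, not feldman_vss's matrix code):

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # early exit: running time depends on the data
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest's running time does not depend on *where* the
    # inputs differ, defeating this style of timing measurement.
    return hmac.compare_digest(a, b)

secret = b"supersecretvalue"
print(leaky_compare(secret, b"supersecretguess"))       # False, but the early
                                                        # exit leaks the prefix
print(constant_time_compare(secret, b"supersecretvalue"))  # True
```

By repeatedly timing `leaky_compare`, an attacker learns how many leading bytes of a guess are correct; the feldman_vss advisory describes the same leak arising from non-constant-time matrix operations.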

v4.8.0

info · research
industry

Sneaky Bits: Advanced Data Smuggling Techniques (ASCII Smuggler Updates)

info · news
security · research

CVE-2025-1550: The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually const

critical · vulnerability
security
Mar 11, 2025
CVE-2025-1550

Keras, a machine learning library, has a vulnerability in its Model.load_model function that allows attackers to run arbitrary code (code injection, where an attacker makes a program execute unintended commands) even when safety features are enabled. An attacker can create a malicious .keras file (a special archive format) and modify its config.json file to specify malicious Python code that runs when the model is loaded.
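Since a .keras file is a ZIP archive containing a config.json, the declared architecture can be inspected before loading anything. The helper below is hypothetical (not a Keras API), and because this CVE bypasses safe_mode, such a check is a triage screen, not a safety guarantee:

```python
import io
import json
import zipfile

def code_carrying_layers(archive_bytes: bytes) -> list:
    """List Lambda layers declared in a .keras archive's config.json."""
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        config = json.loads(zf.read("config.json"))
    layers = config.get("config", {}).get("layers", [])
    return [l["class_name"] for l in layers if l.get("class_name") == "Lambda"]

# Demo on a minimal hand-built archive standing in for a .keras file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(
        {"config": {"layers": [{"class_name": "Dense"},
                               {"class_name": "Lambda"}]}}))
print(code_carrying_layers(buf.getvalue()))  # ['Lambda']
```

The only robust defense remains treating model files like executables: load them only from sources you trust.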

CVE-2025-2149: A vulnerability was found in PyTorch 2.6.0+cu124. It has been rated as problematic. Affected by this issue is the functi

low · vulnerability
security
Mar 10, 2025
CVE-2025-2149

A vulnerability (CVE-2025-2149) was found in PyTorch 2.6.0+cu124 in the Quantized Sigmoid Module's nnq_Sigmoid function, where improper initialization (failing to set up values correctly) occurs when certain parameters are manipulated. The vulnerability requires local access (attacking from the same machine) and is difficult to exploit, with a low severity rating.

CVE-2025-2148: A vulnerability was found in PyTorch 2.6.0+cu124. It has been declared as critical. Affected by this vulnerability is th

medium · vulnerability
security
Mar 10, 2025
CVE-2025-2148

A vulnerability (CVE-2025-2148), declared critical in the original report, was found in PyTorch 2.6.0+cu124 in the function torch.ops.profiler._call_end_callbacks_on_jit_fut, which handles tuples (groups of related data). When the function receives a None argument (a placeholder for "no value"), it causes memory corruption (where data stored in memory gets damaged or overwritten), and the attack can be launched remotely. However, the exploit is difficult to carry out and requires user interaction.

CVE-2025-1945: picklescan before 0.0.23 fails to detect malicious pickle files inside PyTorch model archives when certain ZIP file flag

critical · vulnerability
security
Mar 10, 2025
CVE-2025-1945

picklescan before version 0.0.23 can be tricked into missing malicious pickle files (serialized Python objects) hidden inside PyTorch model archives by modifying certain bits in ZIP file headers. An attacker can use this technique to embed code that runs automatically when someone loads the model with PyTorch, potentially taking over the user's system.

CVE-2025-1944: picklescan before 0.0.23 is vulnerable to a ZIP archive manipulation attack that causes it to crash when attempting to e

medium · vulnerability
security
Mar 10, 2025
CVE-2025-1944

picklescan before version 0.0.23 has a vulnerability where an attacker can manipulate a ZIP archive (a compressed file format) by changing filenames in the ZIP header while keeping the original filename in the directory listing. This causes picklescan to crash with a BadZipFile error when trying to scan PyTorch model files (machine learning models), but PyTorch's more forgiving ZIP handler still loads the model anyway, allowing malicious code to bypass the security scanner.
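A consistency check in the spirit of the fix can be written with the standard library (a sketch, not picklescan's actual code): compare each filename in the ZIP central directory with the copy stored in that entry's local file header, since tools that trust different copies can disagree about what the archive contains.

```python
import io
import struct
import zipfile

def header_name_mismatches(data: bytes) -> list:
    """Report entries whose local-header filename differs from the
    central-directory filename (a sign of archive tampering)."""
    mismatches = []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for zi in zf.infolist():
            off = zi.header_offset
            # Local file header layout: filename length is a little-endian
            # uint16 at bytes 26-27; the filename itself starts at byte 30.
            (name_len,) = struct.unpack("<H", data[off + 26:off + 28])
            local = data[off + 30:off + 30 + name_len].decode("utf-8", "replace")
            if local != zi.filename:
                mismatches.append((zi.filename, local))
    return mismatches

# Demo: tamper with the local header's filename only, leaving the
# central directory (which Python's zipfile reads) untouched.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", b"payload")
raw = bytearray(buf.getvalue())
raw[30:38] = b"evil.pkl"  # overwrite the first local header's 8-byte name
print(header_name_mismatches(bytes(raw)))  # [('data.pkl', 'evil.pkl')]
```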

CVE-2024-13882: The Aiomatic - Automatic AI Content Writer & Editor, GPT-3 & GPT-4, ChatGPT ChatBot & AI Toolkit plugin for WordPress is

high · vulnerability
security
Mar 8, 2025
CVE-2024-13882

The Aiomatic WordPress plugin (used to generate AI-written content and images) has a vulnerability in versions up to 2.3.8 that allows authenticated users with Contributor access or higher to upload any type of file to the server due to missing file type validation (checking what kind of file is being uploaded). This could potentially allow attackers to run malicious code on the affected website.

CVE-2024-13816: The Aiomatic - Automatic AI Content Writer & Editor, GPT-3 & GPT-4, ChatGPT ChatBot & AI Toolkit plugin for WordPress is

medium · vulnerability
security
Mar 8, 2025
CVE-2024-13816

The Aiomatic WordPress plugin (used for AI-powered content writing) has a security flaw in versions up to 2.3.6 where it fails to check user permissions properly, allowing attackers with basic user accounts (Subscriber level and above) to perform dangerous actions like deleting posts, removing files, and clearing logs that they shouldn't be able to access. This vulnerability puts user data at risk of unauthorized modification or deletion.

AI Safety Newsletter #49: Superintelligence Strategy

info · news
policy · safety
CVE-2024-12029 (InvokeAI)

InvokeAI versions 5.3.1 through 5.4.2 contain a remote code execution vulnerability (the ability for attackers to run commands on a system they don't own) in the model installation API. The flaw comes from unsafe deserialization (converting data back into usable code without checking if it's trustworthy) of model files using torch.load, which allows attackers to hide malicious code in model files that gets executed when loaded.

Fix: This issue is fixed in version 5.4.3. Users should update to version 5.4.3 or later.

NVD/CVE Database

Fix notes for the entries above (source: NVD/CVE Database)

Fix (Applio, CVE-2025-27781): A patch is available on the `main` branch of the repository.

Fix (Applio, CVE-2025-27780): A patch is available in the `main` branch of the repository.

Fix (Applio, CVE-2025-27779): A patch is available on the `main` branch of the Applio repository.

Fix (vLLM, CVE-2025-29783): This vulnerability is fixed in vLLM version 0.8.0. Users should upgrade to this version or later.

Fix (vLLM, CVE-2025-29770): This issue is fixed in vLLM version 0.8.0.

Fix (feldman_vss, CVE-2025-29780): As of publication, no patched versions exist. The source text recommends three mitigations: (1) short term, use this library only in environments where attackers cannot measure execution timing; (2) medium term, create custom wrappers around critical operations using constant-time libraries in Rust, Go, or C; (3) long term, wait for the planned Rust implementation mentioned in the library documentation, which is intended to address these issues properly.

Fix (picklescan, CVE-2025-1945): Upgrade picklescan to version 0.0.23 or later. The fix is available in commit e58e45e0d9e091159c1554f9b04828bbb40b9781 at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781.

Fix (picklescan, CVE-2025-1944): Upgrade picklescan to version 0.0.23 or later. The patch is available at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781.

Fix (Aiomatic, CVE-2024-13816): The vulnerability was partially patched in version 2.3.5. Users should update to version 2.3.7 or later for a complete fix (though the source only explicitly mentions a partial patch in 2.3.5).

v4.8.0
MITRE ATLAS Releases
Mar 14, 2025

This content is a product navigation page for GitHub v4.8.0, listing features related to AI code creation, developer workflows, application security, and enterprise solutions. It does not contain technical information about a specific AI or LLM vulnerability, bug, or security issue.

Sneaky Bits: Advanced Data Smuggling Techniques (ASCII Smuggler Updates)
Embrace The Red
Mar 12, 2025

Researchers have discovered advanced data smuggling techniques that use invisible Unicode characters (invisible text that computers can process but humans cannot see) to hide information in LLM inputs and outputs. The technique, called Sneaky Bits, can encode any character or sequence of bytes using only two invisible characters, building on earlier methods that used Unicode Tags and Variant Selectors to inject hidden instructions into AI systems.

AI Safety Newsletter #49: Superintelligence Strategy
CAIS AI Safety Newsletter
Mar 6, 2025

A new policy paper called 'Superintelligence Strategy' proposes that advanced AI systems surpassing human capabilities in most areas pose national security risks requiring a three-part approach: deterrence (using the threat of retaliation to prevent AI dominance races), nonproliferation (restricting advanced AI access by non-state actors such as terrorist groups), and competitiveness (building AI strength domestically). The deterrence strategy, called Mutual Assured AI Malfunction (MAIM), mirrors nuclear strategy by threatening cyberattacks on destabilizing AI projects to prevent any single country from gaining dangerous AI superiority.

The paper explicitly proposes three nonproliferation measures: Compute Security (governments track and monitor high-end AI chips to prevent smuggling), Information Security (AI model weights, which are the trained parameters that define how an AI behaves, are protected like classified intelligence), and AI Security (developers implement technical safety measures to detect and prevent misuse, similar to how DNA synthesis services block orders for dangerous bioweapon sequences).