Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
Applio, a voice conversion tool, has a vulnerability in versions 3.2.8-bugfix and earlier where it unsafely deserializes (reconstructs objects from stored data without validation) user-supplied model files using `torch.load`, which could allow attackers to run arbitrary code on the affected system.
Fix: A patch is available in the `main` branch of the Applio repository.
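The root cause is PyTorch's use of Python's `pickle` format: deserializing a pickle stream can invoke an arbitrary callable via the `__reduce__` hook, so loading an untrusted model file is equivalent to running untrusted code. A minimal stdlib-only sketch of the mechanism (the payload here is deliberately harmless):

```python
import pickle

log = []

def record(msg):
    # Harmless stand-in for what an attacker would replace with
    # os.system, subprocess.call, etc.
    log.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # pickle stores (callable, args) and invokes the callable at load time.
        return (record, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # record() is called here, before any object is returned
print(log)          # -> ['code ran during deserialization']
```

Recent PyTorch releases accept `torch.load(path, weights_only=True)`, which restricts deserialization to tensor data and rejects arbitrary objects; loading models only from trusted sources remains the safer default.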
CVE-2025-29783 is a remote code execution vulnerability in vLLM (a software engine for running large language models efficiently) that occurs when it is configured with Mooncake, a distributed system component. Attackers can exploit unsafe deserialization (the process of converting stored data back into usable objects) exposed over ZMQ/TCP (network communication protocols) to run arbitrary code on any connected systems in a distributed setup.
vLLM, a system for running large language models efficiently, uses the outlines library to support structured output (guidance on what format the AI's answer should follow). The outlines library stores compiled grammar rules in a cache on the hard drive, which is turned on by default. A malicious user can send many requests with different output formats, filling up this cache and causing the system to run out of disk space, making it unavailable to others (a denial of service attack). This problem affects only the V0 engine version of vLLM.
SmartOS, a hypervisor (virtualization software that manages virtual machines) used in Triton Data Center and other products, contains static host SSH keys (unchanging cryptographic credentials for remote access) in a specific Debian 12 LX zone image from July 2024. This means multiple systems could potentially share the same SSH keys, allowing unauthorized remote access if someone obtains these keys.
Keras, a machine learning library, has a vulnerability in its Model.load_model function that allows attackers to run arbitrary code (code injection, where an attacker makes a program execute unintended commands) even when safety features are enabled. An attacker can create a malicious .keras file (a special archive format) and modify its config.json file to specify malicious Python code that runs when the model is loaded.
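Because the advisory says the flaw survives Keras's own safety checks, one defensive option is to inspect a `.keras` archive before loading it: the file is a ZIP whose `config.json` describes the model, and layer types such as `Lambda` can carry executable code. The scanner below is an illustrative sketch, not part of the Keras API (the helper name and the toy archive are invented for the demo):

```python
import io
import json
import zipfile

def find_lambda_layers(keras_archive_bytes):
    """Return class names of layers in a .keras archive that can embed
    executable code. Hypothetical helper, not part of the Keras API."""
    risky = []
    with zipfile.ZipFile(io.BytesIO(keras_archive_bytes)) as zf:
        config = json.loads(zf.read("config.json"))

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") in {"Lambda", "TFOpLambda"}:
                risky.append(node["class_name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return risky

# Toy archive mimicking the .keras layout described in the advisory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("config.json", json.dumps(
        {"class_name": "Functional",
         "config": {"layers": [{"class_name": "Lambda", "config": {}}]}}))

print(find_lambda_layers(buf.getvalue()))  # -> ['Lambda']
```

Flagged archives can then be rejected or reviewed by hand instead of being passed to `Model.load_model`.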
A vulnerability (CVE-2025-2149) was found in PyTorch 2.6.0+cu124 in the Quantized Sigmoid Module's nnq_Sigmoid function, where improper initialization (failing to set up values correctly) occurs when certain parameters are manipulated. The vulnerability requires local access (attacking from the same machine) and is difficult to exploit, with a low severity rating.
A critical vulnerability (CVE-2025-2148) was found in PyTorch 2.6.0+cu124 in a function called torch.ops.profiler._call_end_callbacks_on_jit_fut that handles tuples (groups of related data). When the function receives a None argument (a placeholder for "no value"), it causes memory corruption (where data stored in memory gets damaged or overwritten), and the attack can be launched remotely. However, the exploit is difficult to carry out and requires user interaction.
picklescan before version 0.0.23 can be tricked into missing malicious pickle files (serialized Python objects) hidden inside PyTorch model archives by modifying certain bits in ZIP file headers. An attacker can use this technique to embed code that runs automatically when someone loads the model with PyTorch, potentially taking over the user's system.
picklescan before version 0.0.23 has a vulnerability where an attacker can manipulate a ZIP archive (a compressed file format) by changing filenames in the ZIP header while keeping the original filename in the directory listing. This causes picklescan to crash with a BadZipFile error when trying to scan PyTorch model files (machine learning models), but PyTorch's more forgiving ZIP handler still loads the model anyway, allowing malicious code to bypass the security scanner.
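The mismatch the advisory describes is detectable, because a ZIP stores each filename twice: once in the local file header and once in the central directory. The sketch below (illustrative, not picklescan's actual fix) reads both copies and flags entries where they disagree:

```python
import io
import struct
import zipfile

def local_header_names(data):
    """Map each central-directory filename to the filename stored in the
    corresponding local file header. Illustrative sketch, not
    picklescan's actual fix."""
    names = {}
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for info in zf.infolist():
            start = info.header_offset
            # Local file header: name length at offset 26, name itself at 30.
            name_len, _extra_len = struct.unpack("<HH", data[start + 26:start + 30])
            names[info.filename] = data[start + 30:start + 30 + name_len].decode()
    return names

# Build a normal archive, then tamper with the local-header filename only,
# leaving the central directory (which scanners typically trust) intact.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.pkl", b"payload")
raw = bytearray(buf.getvalue())
raw[raw.index(b"data.pkl")] = ord("X")  # first occurrence is the local header

mismatches = {cd: lh for cd, lh in local_header_names(bytes(raw)).items() if cd != lh}
print(mismatches)  # -> {'data.pkl': 'Xata.pkl'}
```

Any mismatch is a strong signal the archive was crafted to make a scanner and a loader see different contents.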
The Aiomatic WordPress plugin (used to generate AI-written content and images) has a vulnerability in versions up to 2.3.8 that allows authenticated users with Contributor access or higher to upload any type of file to the server due to missing file type validation (checking what kind of file is being uploaded). This could potentially allow attackers to run malicious code on the affected website.
The Aiomatic WordPress plugin (used for AI-powered content writing) has a security flaw in versions up to 2.3.6 where it fails to check user permissions properly, allowing attackers with basic user accounts (Subscriber level and above) to perform dangerous actions like deleting posts, removing files, and clearing logs that they shouldn't be able to access. This vulnerability puts user data at risk of unauthorized modification or deletion.
A vulnerability (CVE-2025-1953) was found in vLLM AIBrix 0.2.0 in the Prefix Caching component (a feature that speeds up AI model processing by reusing cached data), which uses insufficiently random values. The flaw is rated low severity and is difficult to exploit, but it weakens the cryptographic security of the system.
A cross-site scripting (XSS, where an attacker injects malicious code into a webpage to trick users) vulnerability was found in the ChatGPT Open AI Images & Content for WooCommerce plugin, affecting versions up to 2.2.0. The vulnerability allows attackers to inject harmful scripts through reflected XSS (where malicious input is immediately reflected back to the user without proper filtering).
CVE-2025-25185 is a vulnerability in GPT Academic (version 3.91 and earlier) where the software does not properly handle soft links (special files that point to other files). An attacker can create a malicious soft link, upload it in a compressed tar.gz file, and when the server decompresses it, the soft link will point to sensitive files on the victim's server, allowing the attacker to read all server files.
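A common mitigation is to refuse to extract link members from untrusted archives at all. The sketch below builds a malicious archive like the one described and shows how stdlib `tarfile` metadata exposes the symlink before extraction; on Python 3.12+, `TarFile.extractall(..., filter="data")` rejects such members automatically:

```python
import io
import tarfile

def unsafe_members(tar_bytes):
    """Names of members that should never be extracted from an untrusted
    archive: symlinks, hardlinks, and paths escaping the target directory.
    Illustrative sketch of the mitigation."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:*") as tf:
        return [m.name for m in tf.getmembers()
                if m.issym() or m.islnk() or m.name.startswith(("/", ".."))]

# Archive containing a symlink aimed at a sensitive file, as described above.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    link = tarfile.TarInfo("upload/etc_passwd")
    link.type = tarfile.SYMTYPE
    link.linkname = "/etc/passwd"
    tf.addfile(link)

print(unsafe_members(buf.getvalue()))  # -> ['upload/etc_passwd']
```

Serving files through a path that resolves the symlink is what turns this into an arbitrary file read, so the check must happen before decompression, not after.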
A vulnerability (CVE-2024-3303) was found in GitLab EE (a version control platform for managing code) that allows attackers to steal the contents of private issues through prompt injection (tricking the AI by hiding instructions in its input). The flaw affects multiple versions: 16.0 through 17.6.4, 17.7 through 17.7.3, and 17.8 through 17.8.1.
NVIDIA Triton Inference Server has a vulnerability where loading a model with an extremely large file size causes an integer overflow or wraparound error (a type of bug where a number gets too big for its storage space and wraps around to an incorrect value), potentially causing a denial of service (making the system unavailable). The vulnerability exists in the model loading API (the interface used to load AI models into the server).
PandasAI contains a vulnerability where its interactive prompt function can be exploited through prompt injection (tricking the AI by hiding instructions in its input), allowing attackers to run arbitrary Python code and achieve RCE (remote code execution, where an attacker can run commands on a system they don't own) instead of just getting explanations from the language model.
vLLM, a system for running large language models efficiently, has a vulnerability where attackers can craft malicious input to cause hash collisions (when two different inputs produce the same fingerprint value), allowing them to reuse cached data (stored computation results) from previous requests and corrupt subsequent responses. Python 3.12 made certain built-in hash values predictable, making this attack easier to execute intentionally.
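The underlying issue is that Python's built-in `hash()` is fast but not collision-resistant, so it is a poor choice for keys that untrusted input can influence. A sketch of the general mitigation, deriving cache keys from a cryptographic hash instead (illustrative only, not vLLM's actual patch):

```python
import hashlib

# CPython's built-in hash() is not collision-resistant: these two distinct
# values share a hash value (-1 is reserved as an internal error code).
assert hash(-1) == hash(-2)

def cache_key(prefix_tokens):
    """Derive a prefix-cache key from a cryptographic hash of the token
    sequence. Illustrative sketch, not vLLM's actual implementation."""
    data = ",".join(str(t) for t in prefix_tokens).encode()
    return hashlib.sha256(data).hexdigest()

print(cache_key([-1]) == cache_key([-2]))  # -> False: no trivial collision
```

With SHA-256, finding two token sequences that share a key is computationally infeasible, so one tenant cannot deliberately poison another tenant's cached prefixes.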
MDC is a tool that converts Markdown into documents that work with Vue components (a JavaScript framework for building user interfaces). In affected versions, the tool has a security flaw where it doesn't properly validate URLs in Markdown, allowing attackers to sneak in malicious JavaScript code by encoding it in a special format (hex-encoded HTML entities). This can lead to XSS (cross-site scripting, where unauthorized code runs in a user's browser) if the tool processes untrusted Markdown.
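The general lesson is to decode entity-encoded input before validating it, not after. A minimal sketch of scheme allow-listing in that order (illustrative, not MDC's actual patch; the allow-list is an assumption for the demo):

```python
import html
from urllib.parse import urlparse

# Hypothetical allow-list for this demo; relative URLs parse to scheme "".
ALLOWED_SCHEMES = {"http", "https", "mailto", ""}

def is_safe_href(href):
    """Decode HTML entities *before* checking the scheme, so hex-encoded
    payloads like '&#x6A;avascript:...' cannot slip past the filter.
    Illustrative sketch, not MDC's actual patch."""
    decoded = html.unescape(href).strip()
    return urlparse(decoded).scheme.lower() in ALLOWED_SCHEMES

print(is_safe_href("https://example.com"))       # -> True
print(is_safe_href("&#x6A;avascript:alert(1)"))  # -> False
```

A filter that checked the raw string would see no `javascript:` prefix in the second URL and wave it through, which is exactly the bypass described above.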
Fix (vLLM + Mooncake RCE, CVE-2025-29783): This vulnerability is fixed in vLLM version 0.8.0. Users should upgrade to this version or later.
Fix (vLLM outlines cache denial of service): This issue is fixed in vLLM version 0.8.0.
Fix (picklescan ZIP flag-bit evasion): Upgrade picklescan to version 0.0.23 or later. The fix is available in commit e58e45e0d9e091159c1554f9b04828bbb40b9781 at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781
Fix (picklescan ZIP filename-mismatch bypass): Upgrade picklescan to version 0.0.23 or later. The patch is available at https://github.com/mmaitre314/picklescan/commit/e58e45e0d9e091159c1554f9b04828bbb40b9781.
Fix (Aiomatic missing permission checks): The vulnerability was partially patched in version 2.3.5. Users should update to version 2.3.7 or later for a complete fix (though the source only explicitly mentions a partial patch in 2.3.5).
Fix (vLLM AIBrix prefix caching, CVE-2025-1953): Upgrade to vLLM AIBrix version 0.3.0, which addresses this issue.
Fix (GPT Academic soft-link handling, CVE-2025-25185): A patch is available at https://github.com/binary-husky/gpt_academic/commit/5dffe8627f681d7006cebcba27def038bb691949
Fix (vLLM prefix-cache hash collision): This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
Fix (MDC XSS): Upgrade to version 0.13.3 or later. The source states: 'This vulnerability has been addressed in version 0.13.3 and all users are advised to upgrade.'