Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
FastGPT (an AI platform for building AI agents) versions 4.14.8.3 and below have a critical security flaw in the fastgpt-preview-image.yml workflow. The workflow uses pull_request_target (a GitHub trigger that runs with access to repository secrets) yet checks out and executes code from an external contributor's fork, so an attacker who opens a pull request can run arbitrary code in the privileged workflow, steal secrets, and potentially compromise the production container registry (the central storage system for packaged software).
MLflow, a machine learning platform, has a vulnerability (CVE-2025-15031) in how it extracts model files from compressed archives. The issue is that the software uses `tarfile.extractall` (a Python function that unpacks compressed tar files) without checking whether file paths are safe, allowing attackers to use specially crafted archives with `..` (parent directory references) or absolute paths to write files outside the intended folder. This could let attackers overwrite files or execute malicious code, especially in shared environments or when processing untrusted model files.
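This unsafe-extraction pattern is easy to guard against. The sketch below is a generic mitigation for this class of bug (not MLflow's actual patch): it resolves each archive member's destination path and rejects anything that would land outside the target directory.

```python
import os
import tarfile

def safe_extractall(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, refusing members that escape dest_dir.

    Generic guard against '..' and absolute-path members; illustrative
    sketch, not MLflow's actual fix.
    """
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            # A name like "../../etc/cron.d/job" or "/etc/passwd" resolves
            # outside dest_dir and is rejected here.
            if os.path.commonpath([dest_dir, target]) != dest_dir:
                raise ValueError(f"blocked path traversal: {member.name!r}")
        tar.extractall(dest_dir)
```

Python 3.12 also added built-in extraction filters (PEP 706); `tar.extractall(dest_dir, filter="data")` performs similar checks without hand-rolled validation.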
ONNX's onnx.hub.load() function has a security flaw where the silent=True parameter completely disables warnings and user confirmations when loading models from untrusted repositories (sources not officially verified). This means an attacker could trick an application into silently downloading and running malicious models from their own GitHub repository without the user knowing, potentially allowing theft of sensitive files like SSH keys or cloud credentials.
MLflow versions before v3.7.0 contain a command injection vulnerability (a flaw where attackers insert malicious commands into input that gets executed) in the sagemaker module. An attacker can exploit this by passing a malicious container image name through the `--container` parameter, which the software unsafely inserts into shell commands and runs, allowing arbitrary command execution on affected systems.
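The general shape of a shell command injection like this can be shown with a harmless stand-in; the variable name and the `echo` command below are illustrative, not MLflow's actual sagemaker code:

```python
import subprocess

# An attacker-controlled "image name" carrying a shell payload.
malicious_name = "my-image; echo INJECTED"

# VULNERABLE pattern: interpolating the value into a shell string lets
# the "; echo INJECTED" part run as a second command.
unsafe = subprocess.run(
    f"echo building {malicious_name}",
    shell=True, capture_output=True, text=True,
).stdout

# SAFE pattern: an argument list (no shell) keeps the whole value as one
# literal argv entry, so shell metacharacters are inert.
safe = subprocess.run(
    ["echo", "building", malicious_name],
    capture_output=True, text=True,
).stdout
```

In the unsafe run, the shell executes the injected `echo INJECTED` as a separate command; in the safe run, the entire string is printed verbatim as a single argument.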
FastGPT, an AI Agent building platform, has a vulnerability in its Python Sandbox (fastgpt-sandbox) in version 4.14.7 and earlier where attackers can bypass file-write protections by remapping stdout (the standard output stream) to a different file descriptor using fcntl (a tool for controlling file operations), allowing them to create or overwrite files inside the sandbox container despite intended restrictions.
NLTK (a natural language processing library) versions up to 3.9.2 have a vulnerability called path traversal (where an attacker manipulates file paths to access files outside intended directories) in its CorpusReader classes. This allows attackers to read sensitive files on a server when the library processes user-provided file paths, potentially exposing private keys and tokens.
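A typical guard for this class of path traversal resolves the user-supplied path against the corpus root and refuses anything that escapes it. This is a generic sketch of the mitigation, not NLTK's actual fix:

```python
from pathlib import Path

def read_corpus_file(root: str, user_path: str) -> bytes:
    """Resolve user_path under root and refuse anything that escapes it."""
    root_p = Path(root).resolve()
    target = (root_p / user_path).resolve()
    # After resolving symlinks and "..", the target must still sit
    # inside the corpus root.
    if root_p not in target.parents and target != root_p:
        raise PermissionError(f"path escapes corpus root: {user_path!r}")
    return target.read_bytes()
```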
CVE-2026-2256 is a command injection vulnerability (a flaw where an attacker tricks a program into running unwanted operating system commands) in ModelScope's ms-agent software versions v1.6.0rc1 and earlier. An attacker can exploit this by sending specially crafted prompts to execute arbitrary commands on the affected system.
Gradio, a Python package for building AI demos, had a vulnerability (SSRF, or server-side request forgery, where an attacker tricks a server into making requests it shouldn't) before version 6.6.0 that let attackers access internal services and private networks by hosting a malicious Gradio Space that victims load with the `gr.load()` function.
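A common mitigation for this class of SSRF is to resolve the target host and refuse private, loopback, or link-local addresses before fetching. Below is a minimal generic sketch, not Gradio's patch; note it does not defend against DNS rebinding, where a hostname re-resolves to an internal address after the check:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_url(url: str) -> None:
    """Reject URLs whose host resolves to an internal address."""
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no host")
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        # Block RFC 1918 ranges, 127.0.0.0/8, ::1, 169.254.0.0/16, etc.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError(f"refusing internal address {addr} for {url}")
```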
Gradio, a Python package for building AI interfaces quickly, has a vulnerability in versions before 6.6.0 where the _redirect_to_target() function doesn't validate the _target_url parameter, allowing attackers to redirect users to malicious external websites through the /logout and /login/callback endpoints on apps using OAuth (a login system). This vulnerability only affects Gradio apps running on Hugging Face Spaces with gr.LoginButton enabled.
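The fix shipped in 6.6.0 keeps only the path, query, and fragment of the target URL, which neutralizes off-site redirects. A minimal sketch of that kind of sanitization (not Gradio's literal implementation):

```python
from urllib.parse import urlparse, urlunparse

def sanitize_redirect_target(target: str) -> str:
    """Strip scheme and host so the redirect can only stay on-site.

    Also neutralizes protocol-relative targets like //evil.example/x,
    which urlparse treats as having a host.
    """
    parts = urlparse(target)
    return urlunparse(("", "", parts.path, parts.params,
                       parts.query, parts.fragment))
```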
Gradio, a Python package for building web interfaces, has a security flaw in versions 4.16.0 through 6.5.x where it automatically enables fake OAuth routes (authentication shortcuts) that accidentally expose the server owner's Hugging Face access token (a credential used to authenticate with Hugging Face services) to anyone who visits the login page. An attacker can steal this token because the session cookie (a small piece of data the browser stores to track login state) is signed with a hardcoded secret, making it easy to decode.
CVE-2026-3071 is a vulnerability in Flair (a machine learning library) versions 0.4.1 and later that allows arbitrary code execution (running unauthorized commands on a system) when loading a malicious model file. The problem occurs because the LanguageModel class deserializes untrusted data (converts data from an external file without checking if it's safe), which can be exploited by attackers who provide specially crafted model files.
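The underlying hazard in this and similar model-loading bugs is that Python's pickle-based deserialization can invoke arbitrary callables while loading. The demo below substitutes a harmless module-level function for the dangerous calls (such as `os.system`) an attacker would actually embed:

```python
import pickle

log = []

def record(msg):
    # Stand-in for attacker code; a real payload would call os.system,
    # exfiltrate credentials, etc.
    log.append(msg)

class Malicious:
    def __reduce__(self):
        # On unpickling, pickle calls this callable with these args.
        return (record, ("attacker-controlled code ran",))

payload = pickle.dumps(Malicious())
obj = pickle.loads(payload)  # merely "loading the model" runs record()
```

No method on the loaded object needs to be called: the side effect fires during `pickle.loads` itself, which is why loading an untrusted model file is equivalent to executing untrusted code.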
MLflow Tracking Server has a directory traversal (a flaw where an attacker uses special path characters like '../' to access files outside intended directories) vulnerability in its artifact file handler that allows unauthenticated attackers to execute arbitrary code on the server. The vulnerability exists because the server doesn't properly validate file paths before using them in operations, letting attackers run code with the permissions of the service account running MLflow.
A vulnerability called server-side request forgery (SSRF, where an attacker tricks a server into making unwanted web requests) was found in Hugging Face's smolagents version 1.24.0, specifically in the LocalPythonExecutor component's requests.get and requests.post functions. An attacker can exploit this remotely, and the vulnerability has been publicly disclosed, though the vendor did not respond when contacted.
Milvus, a vector database (a specialized storage system for AI data) used in generative AI applications, had a security flaw in versions before 2.5.27 and 2.6.10 where it exposed port 9091 by default, allowing attackers to bypass authentication (security checks that verify who you are) in two ways: through a predictable default token on a debug endpoint, and by accessing the full REST API (the interface applications use to communicate with the database) without any password or login required, potentially letting them steal or modify data.
FastGPT, an AI Agent building platform (software for creating AI systems that perform tasks), has a vulnerability in components that fetch remote data on the server's behalf, such as web page acquisition nodes and HTTP nodes. When these nodes make requests from the server, they could be directed at internal network addresses (a server-side request forgery risk); the issue has been addressed by adding stricter internal network address detection (checks that block requests to internal systems).
Qdrant (a vector similarity search engine and vector database) has a vulnerability in versions 1.9.3 through 1.15.x where an attacker with read-only access can use the /logger endpoint to append data to arbitrary files on the system by controlling the on_disk.log_file path parameter. This vulnerability allows unauthorized file manipulation with minimal privileges required.
vLLM, a system for running large language models, has a vulnerability in versions 0.8.3 through 0.14.0 where sending an invalid image to its multimodal endpoint causes it to leak a heap address (a memory location used for storing data). This information leak significantly weakens ASLR (address space layout randomization, a security feature that randomizes where programs load in memory), and attackers could potentially chain this leak with other exploits to gain remote code execution (the ability to run commands on the server).
A vulnerability in huggingface/text-generation-inference version 3.3.6 allows attackers without authentication to crash servers by sending images in requests. The problem occurs because the software downloads entire image files into memory when checking inputs for Markdown image links (a way to embed images in text), even if it will later reject the request, causing the system to run out of memory, bandwidth, or CPU power.
SQLBot is a data query system that uses a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) to help users query databases. Versions before 1.5.0 have a missing authentication vulnerability in a file upload endpoint that allows attackers without login credentials to upload Excel or CSV files and insert data directly into the database, because the endpoint was added to a whitelist that skips security checks.
A vulnerability in Hugging Face Transformers GLM4 allows attackers to run harmful code on a system by tricking users into opening a malicious file or visiting a malicious webpage. The problem occurs because the software doesn't properly check data when loading model weights (the numerical values that make the AI work), allowing deserialization of untrusted data (converting unsafe external files into code the system will execute).
Fix: Update MLflow to version v3.7.0 or later.
Fix: Update Gradio to version 6.6.0 or later, which fixes the issue.
Fix: Update to Gradio version 6.6.0 or later. Starting in version 6.6.0, the _target_url parameter is sanitized to only use the path, query, and fragment, stripping any scheme or host.
Fix: Update to Gradio version 6.6.0, which fixes the issue.
Fix: Update to Milvus version 2.5.27 or 2.6.10, where this vulnerability is fixed.
Fix: This vulnerability is fixed in version 4.14.7. Update FastGPT to version 4.14.7 or later.
Fix: Update to Qdrant version 1.16.0 or later, where this vulnerability is fixed.
Fix: This vulnerability is fixed in version 0.14.1. Update vLLM to version 0.14.1 or later.
Fix: The issue is resolved in version 3.3.7.
Fix: Update to version 1.5.0 or later, where the vulnerability has been fixed.