Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
LibreChat version 0.8.1-rc2 has a server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making requests to unintended targets) because the Actions feature allows agents to access any remote service without restrictions, including internal components like the RAG API (retrieval-augmented generation system that pulls in external documents). This means attackers could potentially use LibreChat to access internal systems they shouldn't reach.
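The standard mitigation for this class of SSRF is to check every agent-initiated outbound request against an explicit allowlist and to refuse targets that resolve to internal addresses. A minimal stdlib sketch of that pattern (the function name and policy are illustrative, not LibreChat's actual API):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_allowed_target(url: str, allowed_hosts: set[str]) -> bool:
    """Reject URLs not on an explicit allowlist, or that resolve to
    private/loopback addresses (where internal services such as a
    RAG API would typically live)."""
    host = urlparse(url).hostname
    if host is None or host not in allowed_hosts:
        return False
    # Resolve the host and refuse anything pointing inside the network.
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

Resolving before checking matters: an allowlisted-looking hostname can still point at 127.0.0.1 or a private subnet.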
LibreChat version 0.8.1-rc2 has an access control vulnerability where authenticated attackers (users who have logged in) can read the permissions of any agent (a predefined AI assistant with specific instructions) without proper authorization, even if they have no access to that agent. An attacker who knows an agent's ID can view the permissions other users have been granted for it.

LibreChat version 0.8.1-rc2 has a missing authorization (a failure to check if a user has permission to do something) vulnerability that allows an authenticated attacker to upload files to any agent's file storage if they know the agent's ID, even without proper permissions. This could let attackers change how agents behave by adding malicious files.
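Both LibreChat issues above are instances of CWE-862: a handler acts on a resource ID without first checking that the caller is authorized for that resource. A minimal sketch of the missing check (the permission store, decorator, and handler names are hypothetical, not LibreChat's code):

```python
from functools import wraps

# Hypothetical permission store: agent_id -> set of authorized user ids.
PERMISSIONS: dict[str, set[str]] = {}

class Forbidden(Exception):
    pass

def require_agent_access(action: str):
    """Decorator sketch: verify authorization before any handler that
    takes (user_id, agent_id, ...). Merely knowing an agent's ID must
    not be enough to act on it."""
    def deco(handler):
        @wraps(handler)
        def wrapper(user_id, agent_id, *args, **kwargs):
            if user_id not in PERMISSIONS.get(agent_id, set()):
                raise Forbidden(f"{user_id} may not {action} agent {agent_id}")
            return handler(user_id, agent_id, *args, **kwargs)
        return wrapper
    return deco

@require_agent_access("upload files to")
def upload_file(user_id, agent_id, filename):
    # Only reached when the authorization check above passes.
    return f"stored {filename} for {agent_id}"
```

The vulnerable version is equivalent to calling the handler without the decorator: any authenticated user who guesses an agent ID gets through.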
A WordPress plugin called 'Tag, Category, and Taxonomy Manager – AI Autotagger with OpenAI' has a security flaw (CWE-862, missing authorization) in versions up to 3.41.0 that allows contributors and higher-level users to add or remove taxonomy terms (tags and categories) on any post, even ones they don't own, due to missing permission checks. This vulnerability affects authenticated users who have contributor-level access or above.
Anthropic's MCP TypeScript SDK (a toolkit for building AI applications) versions up to 1.25.1 has a ReDoS vulnerability (regular expression denial of service, where a maliciously designed input causes the regex parser to work extremely hard and freeze the system) in its UriTemplate class. An attacker can send a specially crafted URI (web address) that makes the Node.js process (the JavaScript runtime environment) consume excessive CPU and stop responding, causing the application to crash or become unavailable.
A security vulnerability (CVE-2025-15453) exists in Milvus versions up to 2.6.7 in the expr.Exec function, where an attacker can manipulate the code argument to trigger deserialization (converting untrusted data back into executable code), allowing remote exploitation with user credentials. The vulnerability has been publicly disclosed and is rated as medium severity (CVSS 5.3).
Langflow, a tool for building AI-powered agents and workflows, has a security flaw in versions before 1.7.0.dev45 where some API endpoints (the interfaces that software uses to communicate and request data) are missing authentication controls (checks to verify who is using them). This allows anyone without a login to access private user conversations, transaction histories, and delete messages. The vulnerability affects endpoints that handle sensitive personal data and system operations.
MessagePack for Java has a denial-of-service vulnerability in versions before 0.9.11 where specially crafted .msgpack files can trick the library into allocating massive amounts of memory. When the library deserializes (reads and converts) these files, it blindly trusts the size field in EXT32 objects (an extension data type) and tries to allocate a byte array of that declared size, which can be arbitrarily large, causing the Java program to run out of memory and crash.
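The fix pattern is language-independent: never allocate based on a length field from untrusted input without checking it against both a sane cap and the bytes actually available. A stdlib Python sketch of parsing a MessagePack ext32 object (byte layout per the MessagePack spec: 0xc9, a big-endian uint32 length, an int8 type, then the payload; the cap value is illustrative):

```python
import struct

MAX_EXT_LEN = 1 << 20  # 1 MiB cap, illustrative

def read_ext32(buf: bytes, offset: int = 0):
    """Parse a MessagePack ext32 object, refusing declared sizes that
    exceed a cap or the data actually present in the buffer."""
    if buf[offset] != 0xC9:
        raise ValueError("not an ext32 object")
    (length,) = struct.unpack_from(">I", buf, offset + 1)
    # The fix: never trust the declared size blindly.
    if length > MAX_EXT_LEN:
        raise ValueError(f"declared ext length {length} exceeds cap")
    if offset + 6 + length > len(buf):
        raise ValueError("declared ext length exceeds available data")
    ext_type = struct.unpack_from("b", buf, offset + 5)[0]
    payload = buf[offset + 6 : offset + 6 + length]
    return ext_type, payload
```

The vulnerable behavior corresponds to allocating `length` bytes before either check runs: a 9-byte file can then demand a 4 GiB allocation.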
A missing authorization vulnerability (CWE-862, a weakness where the system fails to check if a user has permission to access something) was found in the Recorp AI Content Writing Assistant plugin for WordPress, affecting versions up to 1.1.7. This flaw allows attackers to exploit incorrectly configured access control, meaning they could potentially access features or data they shouldn't be able to reach.
CVE-2025-62116 is a missing authorization vulnerability (a security flaw where the software fails to check if a user has permission to perform an action) in Quadlayers AI Copilot that affects versions up to 1.4.7. The vulnerability allows attackers to exploit incorrectly configured access control security levels, meaning they may be able to access or perform actions they shouldn't be allowed to.
LMDeploy is a toolkit for compressing, deploying, and serving large language models (LLMs). Prior to version 0.11.1, the software had an insecure deserialization vulnerability (unsafe conversion of data back into executable code) where it used torch.load() without the weights_only=True parameter when opening model checkpoint files, allowing attackers to run arbitrary code on a victim's machine by tricking them into loading a malicious .bin or .pt model file.
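PyTorch .bin and .pt checkpoints are pickle archives under the hood, and unpickling untrusted data is code execution by design: any object in the stream can name a callable to run on load. A harmless stdlib demonstration of the mechanism (str.upper stands in for what would be os.system in a real exploit):

```python
import pickle

class MaliciousCheckpoint:
    """Any pickled object can dictate a call to run when it is loaded."""
    def __reduce__(self):
        # A real exploit would return (os.system, ("...",)); str.upper is
        # a harmless stand-in showing that loading *executes* this call.
        return (str.upper, ("arbitrary code ran",))

payload = pickle.dumps(MaliciousCheckpoint())
# "Loading the checkpoint" executes the attacker's chosen call:
result = pickle.loads(payload)
print(result)  # ARBITRARY CODE RAN
```

This is why the fix passes `weights_only=True` to `torch.load()`: that mode restricts deserialization to tensor data and a small set of safe types instead of arbitrary pickled objects.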
LangChain, a framework for building applications powered by LLMs (large language models), had a serialization injection vulnerability (a flaw where specially crafted data can be misinterpreted as legitimate code during the conversion of objects to JSON format) in its toJSON() method. The vulnerability occurred because the method failed to properly escape objects containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating malicious user data as legitimate LangChain objects when deserializing (converting back from JSON format).
LangChain, a framework for building AI agents and applications powered by large language models, had a serialization injection vulnerability (a flaw in how it converts data to stored formats) in its dumps() and dumpd() functions before versions 0.3.81 and 1.2.5. The functions failed to properly escape dictionaries containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating user-supplied data as legitimate LangChain objects during deserialization (converting stored data back into usable form).
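The underlying flaw in both LangChain issues is an in-band signaling problem: the same 'lc' key marks trusted serialized objects and can appear in untrusted user dictionaries. The usual remedy is to escape the marker when serializing untrusted data. A minimal illustrative scheme (this is not LangChain's actual patch, and the '__escaped__' wrapper key is invented for the example):

```python
MARKER = "lc"  # the key LangChain uses to tag serialized objects

def escape_untrusted(obj):
    """Recursively wrap any dict containing the internal marker key so a
    later deserializer cannot mistake user data for a serialized object.
    (Illustrative scheme, not LangChain's actual implementation.)"""
    if isinstance(obj, dict):
        escaped = {k: escape_untrusted(v) for k, v in obj.items()}
        if MARKER in escaped:
            return {"__escaped__": escaped}
        return escaped
    if isinstance(obj, list):
        return [escape_untrusted(v) for v in obj]
    return obj
```

The deserializer then only instantiates objects from unescaped 'lc' dicts, so attacker-supplied payloads round-trip as plain data.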
A vulnerability in Hugging Face Transformers' GLM4 model allows attackers to run harmful code on a system by tricking users into opening a malicious file or visiting a malicious webpage. The problem occurs because the software doesn't properly check data when loading model weights (the numerical parameters the model learned during training), allowing deserialization of untrusted data (converting unsafe external files into code the system will execute).
A vulnerability in Hugging Face Transformers' X-CLIP checkpoint conversion allows attackers to execute arbitrary code (running commands they choose on a system) by tricking users into opening malicious files or visiting malicious pages. The flaw occurs because the code doesn't properly validate checkpoint data before deserializing it (converting stored data back into usable objects), which lets attackers inject malicious code that runs with the same permissions as the application.
A vulnerability in Hugging Face Transformers' HuBERT convert_config function allows attackers to execute arbitrary code (RCE, or remote code execution, where an attacker runs commands on a system) by tricking users into converting a malicious checkpoint (a saved model file). The flaw occurs because the function doesn't properly validate user input before using it to run Python code.
Hugging Face Transformers (a popular library for working with AI language models) has a vulnerability in its SEW-D convert_config function that allows attackers to run arbitrary code (any commands they want) on a victim's computer. The flaw exists because the function doesn't properly check user input before using it to execute Python code, and an attacker can exploit this by tricking a user into converting a malicious checkpoint (a saved model file).
A vulnerability in Hugging Face Transformers (a popular AI library) allows attackers to run arbitrary code on a user's computer through a malicious checkpoint (a saved model file). The flaw exists in the convert_config function, which doesn't properly validate user input before executing it as Python code, meaning an attacker can trick a user into converting a malicious checkpoint to execute code with the user's permissions.
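The convert_config class of bugs above stems from evaluating configuration text from a checkpoint as Python code. The standard safe alternative in Python is ast.literal_eval, which accepts only literals and rejects expressions. A hedged sketch (the function name is hypothetical, not Transformers' API):

```python
import ast

def parse_config_value(text: str):
    """Parse a config value from an untrusted checkpoint without
    executing it. ast.literal_eval accepts only Python literals
    (numbers, strings, tuples, lists, dicts, ...), so an expression
    like __import__('os').system(...) is rejected instead of run."""
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"rejected non-literal config value: {text!r}") from exc
```

Anything that would have reached `eval()` or `exec()` in the vulnerable path fails closed here.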
A vulnerability in Hugging Face Transformers (a popular library for working with AI language models) allows attackers to run arbitrary code on a computer by tricking users into opening malicious files or visiting malicious websites. The flaw occurs because the software doesn't properly check data when loading saved model checkpoints (files that store a model's learned parameters), which lets attackers execute code by sending untrusted data through deserialization (the process of converting stored data back into usable objects).
A vulnerability in Hugging Face Transformers' Transformer-XL model allows attackers to run arbitrary code (remote code execution) on a victim's computer by tricking them into opening a malicious file or visiting a malicious webpage. The flaw occurs because the software doesn't properly validate data when reading model files, allowing attackers to exploit the deserialization process (converting saved data back into objects that the program can use) to inject and execute malicious code.
Fix (LibreChat, all three issues): Fixed in version 0.8.2-rc2. Users should update to this version or later.
Fix (Milvus): A fix is planned for the next release, 2.6.8.
Fix (Langflow): Update to version 1.7.0.dev45 or later, which contains a patch for this vulnerability.
Fix (MessagePack for Java): Update to version 0.9.11 or later, which fixes the vulnerability.
Fix (LMDeploy): This issue has been patched in version 0.11.1.
Fix (LangChain toJSON() serialization injection): Update @langchain/core to version 0.3.80 or 1.1.8, and langchain to version 0.3.37 or 1.2.3, where this issue has been patched.
Fix (LangChain dumps()/dumpd() serialization injection): Update to LangChain version 0.3.81 or version 1.2.5, where this issue has been patched.
Source: NVD/CVE Database