Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
The Azure authentication extension in OpenTelemetry Collector has a critical flaw where it compares bearer tokens (credentials that prove you are who you claim to be) as plain text strings instead of validating them as JWTs (JSON Web Tokens, a standard secure token format). This allows attackers who obtain a valid Azure token to reuse it indefinitely by setting the correct Host header, bypassing authentication entirely.
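A minimal Python sketch of the difference (the extension itself is Go, and the helper names here are illustrative): a plain string comparison accepts a captured token forever, while even a basic JWT check rejects it once its `exp` claim has passed. Real validation must also verify the signature against the issuer's published keys.

```python
import base64
import json
import time

def vulnerable_check(presented: str, stored: str) -> bool:
    # Flawed pattern: the token is treated as an opaque string,
    # so a captured token stays valid indefinitely.
    return presented == stored

def expiry_check(jwt_token: str) -> bool:
    # Partial JWT validation: decode the payload and honor `exp`.
    # A real validator must also check the signature (e.g. against
    # Azure AD's JWKS endpoint), issuer, and audience claims.
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("exp", 0) > time.time()
```

With the vulnerable check, an attacker replaying an expired token is indistinguishable from the legitimate client; with even the partial check above, the replay fails as soon as the token expires.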
A critical authentication bypass vulnerability in the `fast-jwt` library allows attackers to forge valid JSON Web Tokens (JWTs, a standard format for securely transmitting user information) when an asynchronous key resolver function returns an empty string. The library incorrectly accepts an empty HMAC (a cryptographic signature method) secret and allows attackers to compute valid signatures with the empty key, bypassing authentication entirely on versions up to 6.2.3.
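The failure mode can be reproduced in a few lines of Python (a hedged illustration, not fast-jwt's actual Node.js code): HMAC happily signs with an empty key, so a verifier that does not explicitly reject empty secrets will accept attacker-computed signatures.

```python
import hashlib
import hmac

def sign(message: bytes, secret: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_vulnerable(message: bytes, signature: str, secret: bytes) -> bool:
    # Flawed: an empty secret is silently accepted, so anyone who
    # knows the algorithm can forge a "valid" signature.
    return hmac.compare_digest(sign(message, secret), signature)

def verify_fixed(message: bytes, signature: str, secret: bytes) -> bool:
    # Patched behavior: refuse to verify with an empty key at all.
    if not secret:
        raise ValueError("empty HMAC secret")
    return hmac.compare_digest(sign(message, secret), signature)
```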
PraisonAI contains an unauthenticated remote code execution (RCE, where an attacker can run arbitrary commands on a server) vulnerability in `tool_override.py`, a code path missed by a previous security patch (CVE-2026-40287). An attacker can trigger it by sending a POST request to `/v1/recipes/run` with a malicious recipe, causing the server to execute a `tools.py` file without any authentication or security checks. The vulnerability affects version 4.6.31 and other recent versions.
In vLLM versions 0.18.0 through 0.19.1, a bug in the `extract_hidden_states` speculative decoding proposer (a component that predicts tokens ahead of time to speed up AI inference) causes the server to crash when any request includes sampling penalty parameters like `repetition_penalty`. The crash happens because the proposer returns a tensor (multi-dimensional array) with the wrong shape after the first step, causing a shape mismatch error when penalties are applied.
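The shape fix described in the advisory (slicing to `[:, :1]`) can be mimicked with plain Python lists standing in for the tensors vLLM actually uses; the shapes and function name below are illustrative.

```python
def first_column(sampled_token_ids):
    # After the first speculative step the proposer returns a
    # [batch, num_speculative_tokens] matrix, but the penalty code
    # expects [batch, 1]. Keeping only each row's first element
    # mirrors the `sampled_token_ids[:, :1]` slice in the fix.
    return [row[:1] for row in sampled_token_ids]
```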
vLLM (a system for running large language models) has a vulnerability where specially crafted text prompts containing multimodal placeholder tokens (sequences that represent images or videos) without actual image or video data cause the system to crash with an IndexError (a programming error when accessing data that doesn't exist). An unauthenticated attacker can send a single malicious request to a vLLM server to trigger a denial of service attack (making the service unavailable), affecting any deployment that runs vision-capable language models.
The `discover_pipeline_files()` function in ciguard (a tool used by AI agents to scan code repositories) followed symlinks (shortcuts that point to other directories) without proper restrictions, allowing an attacker to trick it into reading sensitive files outside the intended scan directory. An AI agent scanning a malicious folder with planted symlinks could accidentally expose secrets from system directories like ~/.aws/ or /etc/.
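A hedged Python sketch of the mitigation (the function and parameter names echo the advisory, but the body is illustrative, not ciguard's code): refuse to follow symlinks by default, and drop any result whose resolved path escapes the scan root.

```python
import os
from pathlib import Path

def discover_files(root: str, follow_symlinks: bool = False):
    root_path = Path(root).resolve()
    found = []
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=follow_symlinks):
        for name in filenames:
            candidate = Path(dirpath, name)
            # Even with symlink following enabled, reject anything
            # whose real location lies outside the scan root.
            if candidate.resolve().is_relative_to(root_path):  # Python 3.9+
                found.append(candidate)
    return found
```

With `followlinks=False`, `os.walk` never descends into a planted symlink pointing at `~/.aws/` or `/etc/`, and the resolved-path filter catches file-level symlinks as well.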
The OpAMP client (a component for managing telemetry agents) reads HTTP responses without limiting how much data it accepts, which could allow an attacker controlling the server to send extremely large responses and exhaust the application's memory, causing it to crash. This vulnerability only affects applications where the OpAMP server is untrusted or could be intercepted by a network attacker.
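This class of fix is easy to sketch in Python (the 128 KB cap comes from the advisory's patch description; everything else here is illustrative): bound the number of bytes read from the response body instead of trusting the server.

```python
MAX_RESPONSE_BYTES = 128 * 1024  # cap mirroring the OpAMP client fix

def read_capped(stream, limit: int = MAX_RESPONSE_BYTES) -> bytes:
    # Read at most `limit` bytes; a body that exceeds the cap is
    # treated as an error rather than an invitation to exhaust memory.
    data = stream.read(limit + 1)
    if len(data) > limit:
        raise ValueError("response exceeds size limit")
    return data
```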
SQLBot is a Text-to-SQL system (software that converts natural language questions into SQL database queries) that uses large language models and RAG (retrieval-augmented generation, where the AI pulls in external data to help answer questions). In versions 1.7.0 and earlier, it has a prompt injection vulnerability (where an attacker hides malicious instructions in their input to trick the AI), because user questions are directly inserted into the AI prompt without filtering, and the resulting SQL commands are executed without checking if they're safe. An attacker with access can craft a malicious question to make the system run harmful SQL commands, potentially allowing remote code execution (the ability to run commands on a system they don't own) when using PostgreSQL.
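One mitigation layer, sketched in Python under the assumption that Text-to-SQL output should be read-only (the keyword list and function name are illustrative, not SQLBot's patch): reject any generated statement that is not a single SELECT before it reaches the database.

```python
import re

# Write/DDL keywords a prompt-injected model might emit. This is a
# defense-in-depth heuristic, not a complete SQL parser.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|copy|do)\b",
    re.IGNORECASE,
)

def is_safe_select(generated_sql: str) -> bool:
    # Allow only a single statement (no stacked ";" queries) that
    # starts with SELECT and contains no write/DDL keywords.
    statements = [s for s in generated_sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    stmt = statements[0].strip()
    return stmt.lower().startswith("select") and not FORBIDDEN.search(stmt)
```

Running the generated SQL under a database role with read-only privileges remains the stronger control; a keyword filter alone can be evaded.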
The Network-AI project has a critical vulnerability where its MCP HTTP endpoint (a server that handles tool requests) accepts requests without any authentication checks, and binds to 0.0.0.0 (making it accessible from any network). This allows anyone who can reach the server to call privileged tools that can read and modify the system's configuration, control agents, create security tokens, and adjust budget limits.
A vulnerability was found in Langchain-Chatchat (a chatbot framework) up to version 0.3.1.3 in the file upload handler component. The vulnerability involves insufficiently random values (meaning the system doesn't generate unpredictable numbers properly), which could be exploited by someone on the same local network, though the attack is difficult to carry out.
A vulnerability (CVE-2026-7846) exists in Langchain-Chatchat versions up to 0.3.1.3 in the OpenAI-Compatible File Upload API. The flaw involves a time-of-check time-of-use bug (a race condition where a file is checked for safety, then modified before it's actually used), triggered by manipulating the file.filename argument, though it requires local network access and is difficult to exploit.
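A time-of-check/time-of-use bug of this shape, in hedged Python (the checks and names are illustrative, not Langchain-Chatchat's code): validating an attribute and then reading it again later leaves a window for the value to change; snapshotting and sanitizing it once closes that window.

```python
import os

def save_upload_vulnerable(upload, dest_dir):
    # Time of check: filename looks safe here...
    if "/" in upload.filename:
        raise ValueError("bad filename")
    # ...time of use: if `filename` can change between the check and
    # this second read, the check above is meaningless.
    return os.path.join(dest_dir, upload.filename)

def save_upload_fixed(upload, dest_dir):
    # Read the value once, sanitize the snapshot, and use only it.
    name = os.path.basename(str(upload.filename))
    if not name or name in {".", ".."}:
        raise ValueError("bad filename")
    return os.path.join(dest_dir, name)
```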
A vulnerability (CVE-2026-7845) was discovered in Langchain-Chatchat version 0.3.1.3 and earlier, affecting a function that handles pasting images in the chat interface. An attacker on the same local network could exploit this flaw by manipulating image data to force the use of a weak cryptographic hash (a hashing algorithm that is easy to break), though the attack is difficult to execute and requires significant technical skill.
A vulnerability in Langchain-Chatchat (a chatbot framework) up to version 0.3.1.3 allows attackers on the same local network to perform file operations without authentication (no login check). The flaw affects file-related functions such as listing, retrieving, and deleting files, and exploit code is publicly available.
The GeekyBot WordPress plugin (up to version 1.2.0) has a SQL injection vulnerability (a type of attack where hackers insert malicious database commands into user input) in the 'attributekey' parameter. Because the plugin doesn't properly clean user input or secure its database queries, unauthenticated attackers can add extra SQL commands to extract sensitive data from the site's database.
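The underlying fix pattern, sketched with Python's `sqlite3` for illustration (the plugin itself is PHP/WordPress, and the table name here is invented): pass user input as a bound parameter instead of splicing it into the query string.

```python
import sqlite3

def find_by_attribute(conn, attributekey: str):
    # Vulnerable pattern (do NOT do this):
    #   conn.execute(f"SELECT value FROM attributes WHERE attr_key = '{attributekey}'")
    # Parameterized version: the driver treats the value purely as
    # data, so input like "' OR '1'='1" cannot alter the query.
    cur = conn.execute(
        "SELECT value FROM attributes WHERE attr_key = ?", (attributekey,)
    )
    return cur.fetchall()
```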
Between February and April 2026, the ogham-mcp package accidentally published 22 versions on PyPI (the Python package repository) with embedded credentials, including database passwords for Neon postgres (a database service) and a Voyage AI API key (a token that grants access to an AI service). No evidence of actual misuse was found, and all credentials have been rotated by the maintainers.
Titra, an open source time tracking application, has a vulnerability in version 0.99.52 where the globalsettings Meteor publication (a feature that broadcasts data to connected users) exposes sensitive configuration information like API keys without checking if the user has admin permissions. Any authenticated user (someone logged into the system) can access these secrets through DDP (the protocol Meteor uses to send data to clients).
Apache OpenNLP has a vulnerability where three methods in AbstractModelReader read count values from binary model files without checking if they're reasonable, allowing an attacker to trigger an OOM error (a crash caused by the program running out of memory) by creating a malicious .bin file with an extremely large count value. This denial of service (making a service unavailable) attack requires minimal file size and crashes the Java virtual machine early during model loading.
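The bound check in the fix translates directly to any binary-format reader; a hedged Python sketch (OpenNLP itself is Java, and the 4-byte layout here is invented for illustration): validate a length field before sizing any allocation by it.

```python
import struct

MAX_ENTRIES = 10_000_000  # upper bound mirroring the OpenNLP default

def read_count(stream) -> int:
    # Read a 4-byte big-endian count and sanity-check it before any
    # allocation is sized by it; a hostile .bin file can otherwise
    # declare billions of entries in just a few bytes.
    (count,) = struct.unpack(">i", stream.read(4))
    if count < 0 or count > MAX_ENTRIES:
        raise ValueError(f"implausible entry count: {count}")
    return count
```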
Evolver, a self-evolving engine for AI agents, had a prototype pollution vulnerability (a bug where attackers inject malicious properties into core JavaScript objects) in versions before 1.69.3. The flaw existed in functions that merged user data without blocking dangerous keys like __proto__ and constructor, allowing attackers to modify how all JavaScript objects behave.
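Prototype pollution is specific to JavaScript's object model, but the shape of the fix is language-agnostic; here is a hedged Python sketch of a recursive merge that skips the dangerous keys named in the advisory (Evolver's actual patch is JavaScript).

```python
DANGEROUS_KEYS = {"__proto__", "constructor", "prototype"}

def safe_merge(target: dict, source: dict) -> dict:
    for key, value in source.items():
        if key in DANGEROUS_KEYS:
            continue  # in JS these keys would reach Object.prototype
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            safe_merge(target[key], value)  # merge nested objects
        else:
            target[key] = value
    return target
```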
Evolver, a tool that helps AI agents improve themselves, had a command injection vulnerability (a security flaw where attackers trick the system into running unauthorized commands) in versions before 1.69.3. The flaw was in the _extractLLM() function, which built shell commands using simple string concatenation without cleaning the input first, allowing attackers to execute arbitrary commands on the server when certain input contained shell metacharacters (special characters that have meaning to the command system).
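The vulnerable pattern and its fix, sketched in Python rather than Evolver's JavaScript (the function names are illustrative): build the argument vector directly instead of concatenating a shell string, so metacharacters are never interpreted.

```python
import subprocess

def run_vulnerable(user_input: str):
    # Flawed: `user_input` is spliced into a shell command, so input
    # like "x; rm -rf /" or "$(...)" is executed by the shell.
    return subprocess.run("echo " + user_input, shell=True,
                          capture_output=True, text=True)

def run_fixed(user_input: str):
    # Safe: the argument list bypasses the shell entirely, so shell
    # metacharacters in `user_input` are passed as literal data.
    # (If a shell is unavoidable, quote with shlex.quote() instead.)
    return subprocess.run(["echo", user_input],
                          capture_output=True, text=True)
```

The test below uses a benign injection payload on a POSIX system: the vulnerable version runs the second command, while the fixed version echoes the payload verbatim.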
Evolver, a GEP-powered self-evolving engine for AI agents, contained a path traversal vulnerability (a type of attack where an attacker manipulates file paths to access files outside their intended directory) in versions before 1.69.3. The vulnerability was in the skill download command's --out= flag, which did not validate user-provided file paths, allowing attackers to write files to any location on the system, potentially overwriting critical files.
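A hedged Python sketch of the check the `--out=` flag was missing (Evolver is JavaScript; the function name and base-directory convention here are invented): resolve the requested path and require it to remain under an allowed directory before writing anything.

```python
from pathlib import Path

def safe_out_path(user_path: str, base_dir: str) -> Path:
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    # Reject "../../etc/cron.d/evil"-style escapes: after resolving
    # ".." components, the target must still sit inside `base`.
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"output path escapes {base}")
    return target
```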
Fix: No patch version, code fix, or mitigation strategy was provided in the source advisory.
Source: GitHub Advisory Database
Fix: Fixed in vLLM v0.20.0 (PR #38610) by slicing the return value to `sampled_token_ids[:, :1]` to ensure the correct shape. If upgrading is not possible, either avoid using `extract_hidden_states` as the speculative decoding method, or strip penalty parameters (`repetition_penalty`, `frequency_penalty`, `presence_penalty`) from incoming requests at an API gateway before they reach vLLM.
Source: GitHub Advisory Database
Fix: Fixed in v0.8.2 and v0.8.3. The patch adds a new `follow_symlinks: bool = False` parameter to `discover_pipeline_files()` that refuses to descend into symlinked directories or files by default. Additionally, all results are filtered to verify their resolved paths lie under the requested root directory, even if callers enable symlink following.
Source: GitHub Advisory Database
Fix: Update to the patched version: pull request #4116 updates the OpAMP client HTTP transport to limit the maximum size of responses to 128 KB, preventing unbounded memory consumption.
Source: GitHub Advisory Database
Fix: This issue has been fixed in version 1.7.1.
Source: NVD/CVE Database
Fix: Upgrade to v0.11.1 immediately by running `pip install --upgrade "ogham-mcp>=0.11.1"`. This version removes the leaked credentials and adds automated scanning to prevent future credential leaks. Users do not need to rotate credentials on their own end, as the exposed credentials belonged to the project maintainers, not to users.
Source: GitHub Advisory Database
Fix: 2.x users should upgrade to 2.5.9. 3.x users should upgrade to 3.0.0-M3. The fix adds an upper bound check (default 10,000,000) on the three count fields before array allocation; values that are negative or exceed the bound throw an IllegalArgumentException and fail safely. Users who cannot upgrade immediately should treat all .bin model files as untrusted input unless their origin is verified, and avoid loading models from end users or third-party repositories without integrity checks. Deployments needing higher limits can set the OPENNLP_MAX_ENTRIES system property at JVM startup (e.g., -DOPENNLP_MAX_ENTRIES=50000000).
Source: NVD/CVE Database
Fix: Update to version 1.69.3, where this issue has been patched.
Source: NVD/CVE Database
Fix: This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.
Source: NVD/CVE Database
Fix: This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.
Source: NVD/CVE Database