Security vulnerabilities, privacy incidents, safety concerns, and policy updates affecting LLMs and AI agents.
CVE-2026-21256 is a command injection vulnerability (a flaw where attackers can sneak malicious commands into input that a program then executes) found in GitHub Copilot and Visual Studio that allows unauthorized attackers to run code on a network. The vulnerability stems from improper handling of special characters in commands, which means the software doesn't properly filter or neutralize dangerous input before using it.
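The class of flaw is easy to see in miniature. The sketch below (hypothetical code, not Copilot's) contrasts interpolating untrusted input into a shell string with passing it as a discrete argument:

```python
import shlex

def build_grep_unsafe(pattern: str) -> str:
    # VULNERABLE pattern: untrusted input is interpolated into a shell
    # string, so quotes and metacharacters in it become part of the command.
    return f'grep -r "{pattern}" .'

def build_grep_safe(pattern: str) -> list[str]:
    # Safer pattern: pass arguments as a list (e.g. to subprocess.run),
    # so the input stays a single argv entry and is never shell-parsed.
    return ["grep", "-r", pattern, "."]

payload = 'x"; rm -rf ~; echo "'
# Tokenizing the unsafe command string the way a shell would shows the
# injected `rm` surfacing as its own token; the safe form keeps it inert.
unsafe_tokens = shlex.split(build_grep_unsafe(payload))
safe_argv = build_grep_safe(payload)
```

This is the "improper neutralization of special elements" the CVE describes: the fix is never letting attacker-controlled text reach a shell parser.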
CVE-2026-25904 is a security flaw in the Pydantic-AI MCP Run Python tool where the Deno sandbox (a restricted environment for running code safely) is configured too permissively, allowing Python code to access the localhost interface and perform SSRF attacks (server-side request forgery, where an attacker tricks a server into making unwanted requests). The project is archived and unlikely to receive a fix.
GitLab AI Gateway had a vulnerability in its Duo Workflow Service component where user-supplied data wasn't properly validated before being processed (insecure template expansion), allowing attackers to craft malicious workflow definitions that could crash the service or execute code on the Gateway. This flaw affected multiple versions of the AI Gateway.
Qdrant (a vector similarity search engine and vector database) has a vulnerability in versions 1.9.3 through 1.15.x where an attacker with read-only access can use the /logger endpoint to append data to arbitrary files on the system by controlling the on_disk.log_file path parameter. This vulnerability allows unauthorized file manipulation with minimal privileges required.
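A generic mitigation for this kind of path-controlled write is to resolve the client-supplied path and require that it stays inside a single allowed directory. A minimal sketch (hypothetical directory and function names, not Qdrant's API):

```python
import os

ALLOWED_LOG_DIR = "/var/log/app"  # hypothetical directory where logs may live

def resolve_log_path(user_path: str) -> str:
    # Join the client-supplied name onto the allowed directory, resolve
    # symlinks and ".." components, then require the result to stay inside
    # the allowed directory; otherwise a read-only caller could point the
    # logger at any file on the system and append to it.
    allowed = os.path.realpath(ALLOWED_LOG_DIR)
    candidate = os.path.realpath(os.path.join(allowed, user_path))
    if os.path.commonpath([candidate, allowed]) != allowed:
        raise ValueError(f"log path escapes {ALLOWED_LOG_DIR}: {user_path!r}")
    return candidate
```

The `realpath` call before the containment check matters: comparing unresolved strings would still let `..` segments or symlinks escape the directory.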
Microsoft's Semantic Kernel SDK (a tool for building AI agents that work together) had a vulnerability before version 1.70.0 that allowed attackers to write arbitrary files (place files anywhere on the system) through the SessionsPythonPlugin component. The vulnerability has been fixed in version 1.70.0.
Enclave is a secure JavaScript sandbox used to safely run code written by AI agents. Before version 2.10.1, attackers could bypass its security protections in three ways: using dynamic property accesses to skip code validation, exploiting how error objects work in Node.js's vm module (a built-in tool for running untrusted code safely), and accessing functions through host object references to escape sandbox restrictions.
Pydantic AI, a Python framework for building AI applications, has a Server-Side Request Forgery vulnerability (SSRF, where an attacker tricks a server into making requests to unintended internal resources) in versions 0.0.26 through 1.55.x. If an application accepts message history from untrusted users, attackers can inject malicious URLs that make the server request internal services or steal cloud credentials. This only affects apps that take external user input for message history.
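A common defensive check for this class of SSRF is to resolve each outbound URL's host and refuse private, loopback, and link-local addresses, which is where internal services and cloud credential endpoints (such as the 169.254.169.254 metadata service) live. A rough sketch of that pattern, not Pydantic AI's actual fix:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Reject URLs whose host resolves to a private, loopback, or
    # link-local address -- the usual targets of an SSRF attack.
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Checking the *resolved* addresses rather than the URL string also catches hostnames that an attacker points at internal IPs via DNS.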
Pydantic AI versions 1.34.0 to before 1.51.0 contain a path traversal vulnerability (a flaw where attackers can access files outside intended directories) in the web UI that lets attackers inject malicious JavaScript via a specially crafted URL. When a victim visits this URL or loads it in an iframe (an embedded webpage), the attacker's code runs in the victim's browser and can steal chat history and other data. Only applications using the Agent.to_web feature or the CLI web serving option are affected.
Claude Code, a tool that uses AI to help write software, had a security flaw in versions before 2.1.2 where its bubblewrap sandboxing mechanism (a security container that isolates code) failed to protect a settings file called .claude/settings.json if it didn't already exist. This allowed malicious code running inside the sandbox to create this file and add persistent hooks (startup commands that execute automatically), which would then run with elevated host privileges when Claude Code restarted.
Claude Code (an AI tool that can write and modify software) before version 2.1.7 had a security flaw where it could bypass file access restrictions through symbolic links (shortcuts that point to other files). If a user blocked Claude Code from reading a sensitive file like /etc/passwd, the tool could still read it by accessing a symbolic link pointing to that file, bypassing the security controls.
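The underlying mistake generalizes: an access check applied to the literal path a caller supplies can be sidestepped with a symlink, so the path should be resolved to its real target first. A self-contained demonstration (hypothetical deny-list, not Claude Code's mechanism):

```python
import os
import tempfile

DENIED = {"secret.txt"}  # hypothetical deny-list of protected file names

def is_denied_naive(path: str) -> bool:
    # VULNERABLE pattern: checks only the literal path the caller supplied.
    return os.path.basename(path) in DENIED

def is_denied_resolved(path: str) -> bool:
    # Resolve symlinks first, then apply the deny-list to the real target.
    return os.path.basename(os.path.realpath(path)) in DENIED

# Demonstration: a symlink with an innocent name pointing at a denied file.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "secret.txt")
link = os.path.join(workdir, "innocent.txt")
open(target, "w").close()
os.symlink(target, link)

naive_blocks_link = is_denied_naive(link)        # bypass succeeds
resolved_blocks_link = is_denied_resolved(link)  # bypass caught
```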
Claude Code (an AI tool that can write and run code automatically) had a security flaw before version 2.0.55 where it didn't properly check certain commands, allowing attackers to write files to protected folders they shouldn't be able to access, as long as they could get Claude Code to run commands with the "accept edits" feature turned on.
Claude Code, an agentic coding tool (AI software that can write and execute code), had a security flaw in versions before 2.0.57 where it failed to properly check directory changes. An attacker could use the cd command (change directory, which moves to a different folder) to navigate into protected folders like .claude and bypass write protections. This allowed them to create or modify files without the user's approval, especially if they could inject malicious instructions into the tool's context window (the information the AI reads before responding).
AutoGPT is a platform for creating and managing AI agents that automate workflows. Before version 0.6.34, the SendDiscordFileBlock feature had an SSRF vulnerability (server-side request forgery, where an attacker tricks the server into making unwanted requests to internal systems) because it didn't filter user-provided URLs before accessing them.
OpenClaw, a personal AI assistant, had a vulnerability in its isValidMedia() function (the code that checks if media files are safe to access) that allowed attackers to read any file on a system by using special file paths, potentially stealing sensitive data. This flaw was fixed in version 2026.1.30.
Claude Code is an agentic coding tool (software that can automatically write and execute code) that had a vulnerability in versions before 2.0.72 where attackers could bypass safety confirmation prompts and execute untrusted commands through the find command by injecting malicious content into the tool's context window (the information the AI reads before responding). The vulnerability has a CVSS score (a 0-10 severity rating) of 7.7, which is considered high severity.
Claude Code, an agentic coding tool (AI software that writes and manages code), had a vulnerability in versions before 2.0.74 where a flaw in how it validated commands for Bash (a Unix shell) allowed attackers to bypass directory restrictions and write files outside the intended folder without the user's permission. The attack required the user to be running ZSH (a different Unix shell) and to allow untrusted content into Claude Code's input.
Claude Code, a tool that helps AI write and execute code automatically, had a security flaw before version 1.0.111 where it didn't properly check website addresses (URLs) before making requests to them. The app used a simple startsWith() check (looking only at the beginning of a domain name), which meant attackers could register fake domains like modelcontextprotocol.io.example.com that would be mistakenly trusted, allowing the tool to send data to attacker-controlled sites without the user knowing.
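The difference between a prefix check and a proper hostname match is worth spelling out. In the sketch below (hypothetical helper names), the naive version accepts the look-alike domain from the description, while an exact-or-subdomain match rejects it:

```python
from urllib.parse import urlparse

TRUSTED_DOMAIN = "modelcontextprotocol.io"

def is_trusted_naive(url: str) -> bool:
    # VULNERABLE pattern: a prefix check also accepts attacker-registered
    # domains such as modelcontextprotocol.io.example.com.
    host = urlparse(url).hostname or ""
    return host.startswith(TRUSTED_DOMAIN)

def is_trusted(url: str) -> bool:
    # Correct check: the hostname must equal the trusted domain exactly,
    # or be a genuine subdomain of it (dot-separated suffix match).
    host = urlparse(url).hostname or ""
    return host == TRUSTED_DOMAIN or host.endswith("." + TRUSTED_DOMAIN)
```

Parsing the hostname out of the URL first is also important: matching against the raw URL string would additionally miss tricks in the path or userinfo portions.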
vLLM, a system for running large language models, has a vulnerability in versions 0.8.3 through 0.14.0 where sending an invalid image to its multimodal endpoint causes it to leak a heap address (a memory location used for storing data). This information leak significantly weakens ASLR (address space layout randomization, a security feature that randomizes where programs load in memory), and attackers could potentially chain this leak with other exploits to gain remote code execution (the ability to run commands on the server).
Amazon SageMaker Python SDK (a library for building machine learning models on AWS) versions before v3.1.1 or v2.256.0 have a vulnerability where TLS certificate verification (the security check that confirms a website is genuine) is disabled for HTTPS connections when importing a Triton Python model, allowing attackers to use fake or self-signed certificates to intercept or manipulate data. This vulnerability has a CVSS score (a 0-10 rating of severity) of 8.2, indicating high severity.
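In Python's standard ssl module, the gap between the vulnerable and the correct configuration comes down to two context settings. A sketch of the general pattern (not the SDK's actual code):

```python
import ssl

# Insecure: an SSLContext with hostname checking and certificate
# verification turned off accepts any certificate, including self-signed
# ones, enabling man-in-the-middle interception.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# Secure default: verify the server certificate against the system trust
# store and check that it matches the requested hostname.
secure = ssl.create_default_context()
```

`create_default_context()` gives `CERT_REQUIRED` with hostname checking enabled, which is what an updated client should use for every HTTPS connection.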
A vulnerability in huggingface/text-generation-inference version 3.3.6 allows attackers without authentication to crash servers by sending images in requests. The problem occurs because the software downloads entire image files into memory when checking inputs for Markdown image links (a way to embed images in text), even if it will later reject the request, causing the system to run out of memory, bandwidth, or CPU power.
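The general remedy for this kind of resource exhaustion is to cap the body size while streaming, instead of buffering the whole file before validation. A minimal sketch with a hypothetical size limit:

```python
import io

MAX_IMAGE_BYTES = 10 * 1024 * 1024  # hypothetical 10 MiB cap per image

def read_capped(stream, limit: int = MAX_IMAGE_BYTES) -> bytes:
    # Read at most `limit` bytes from a file-like response body; abort as
    # soon as the cap is exceeded instead of buffering the entire payload.
    chunks, total = [], 0
    while True:
        chunk = stream.read(64 * 1024)
        if not chunk:
            return b"".join(chunks)
        total += len(chunk)
        if total > limit:
            raise ValueError("image exceeds size limit; rejecting request")
        chunks.append(chunk)
```

A real server would also check Content-Length up front and apply timeouts, but the streaming cap is what stops an attacker-supplied image from exhausting memory or bandwidth.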
Fix: Update GitLab AI Gateway to version 18.6.2, 18.7.1, or 18.8.1, depending on the release line you are running.
Fix: Update Qdrant to version 1.16.0 or later.
Fix: Update to Microsoft.SemanticKernel.Core version 1.70.0 or later. Alternatively, create a Function Invocation Filter (a check that runs before function calls) that inspects the arguments passed to DownloadFileAsync or UploadFileAsync and ensures the provided localFilePath is allow-listed (checked against an approved list of file paths).
Fix: Update Enclave to version 2.10.1 or later.
Fix: Update Pydantic AI to version 1.56.0 or later.
Fix: Update Pydantic AI to version 1.51.0 or later.
Fix: Update Claude Code to version 2.1.2 or later.
Fix: Update Claude Code to version 2.1.7 or later.
Fix: Update Claude Code to version 2.0.55 or later.
Fix: Update Claude Code to version 2.0.57 or later.
Fix: Update to autogpt-platform-beta-v0.6.34 or later.
Fix: Update OpenClaw to version 2026.1.30 or later.
Fix: Update Claude Code to version 2.0.72 or later.
Fix: Update Claude Code to version 2.0.74 or later.
Fix: Update Claude Code to version 1.0.111 or later.
Fix: Update vLLM to version 0.14.1 or later.
Fix: Update Amazon SageMaker Python SDK to version v3.1.1 or v2.256.0 or later, depending on the major version you are using.
Fix: Update text-generation-inference to version 3.3.7 or later.
Source: NVD/CVE Database