aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741
Last 24 hours: 32
Last 7 days: 172
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized software (compromised code) containing malware.
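Leaks like this come from source-map files shipped inside the published package. A minimal sketch of a pre-publish check (the helper name is illustrative, not Anthropic's or npm's tooling):

```python
import os

def find_source_maps(package_dir):
    """Walk an unpacked npm package and list any .map files.

    Source maps can embed the original source in their
    "sourcesContent" field, so publishing them alongside minified
    JavaScript can leak the full pre-build code.
    """
    hits = []
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            if name.endswith(".map"):
                hits.append(os.path.join(root, name))
    return hits
```

Running something like this over the extracted output of `npm pack` before `npm publish` could flag a stray map file before it ships.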


AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that allow attackers to execute arbitrary code (run their own commands) by tricking users into opening malicious files, with Claude Code generating working proof-of-concept attacks in minutes.

Latest Intel

01

LlamaIndex v0.14.14

security
Feb 10, 2026

LlamaIndex version 0.14.14 is a maintenance release that fixes multiple bugs across core components and integrations, including issues with error handling in vector store queries, compatibility with deprecated Python functions, and empty responses from language models. The release also adds new features like a TokenBudgetHandler for cost governance and improves security defaults in core components. Several integrations with external services (OpenAI, Google Gemini, Anthropic, Bedrock) were updated to support new models and fix compatibility issues.

Fix: Users should update to version 0.14.14. The release notes explicitly mention: "Fix potential crashes and improve security defaults in core components (#20610)" and include specific bug fixes such as "fix(agent): handle empty LLM responses with retry logic" (#20596) and "Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated" (#20517).

LlamaIndex Security Releases

Critical This Week (5 issues)

critical
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/
NVD/CVE Database · Mar 31, 2026

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input), prompting Google to begin addressing the disclosed issues.

Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and AI assistant that can describe what the wearer sees and provide information, but raise significant privacy concerns because they can record video of others without their knowledge or consent.
02

CVE-2026-26003: FastGPT is an AI Agent building platform. From 4.14.0 to 4.14.5, attackers can directly access the plugin system through unauthenticated calls to certain API endpoints.

security
Feb 10, 2026

FastGPT (an AI platform for building AI agents) versions 4.14.0 to 4.14.5 have a vulnerability where attackers can access the plugin system without authentication by directly calling certain API endpoints, potentially crashing the plugin system and causing users to lose their plugin installation data, though not exposing sensitive keys. This vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 6.9, which is considered medium severity.
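The qualitative bands for CVSS v3.x base scores are fixed by the specification, so the "6.9 is medium" mapping above can be reproduced with a small lookup (a generic sketch, not FastGPT- or NVD-specific code):

```python
def cvss_severity(score):
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Note the hard boundary at 7.0: a 6.9 is the highest possible "Medium", one tenth of a point below "High".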

Fix: This vulnerability is fixed in version 4.14.5-fix. Users should upgrade to this patched version.

NVD/CVE Database
03

CVE-2026-21523: Time-of-check time-of-use (TOCTOU) race condition in GitHub Copilot and Visual Studio allows an authorized attacker to execute code over a network.

security
Feb 10, 2026

CVE-2026-21523 is a time-of-check time-of-use (TOCTOU) race condition (a vulnerability where an attacker exploits the gap between when a system checks permissions and when it uses a resource) in GitHub Copilot and Visual Studio that allows an authorized attacker to execute code over a network. The vulnerability has not yet received a CVSS severity rating from NIST.
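The technical details of this CVE are not public, but the general TOCTOU pattern can be illustrated with the classic file-system version of the bug (generic Python, not the Copilot code path):

```python
import os

def read_if_allowed_racy(path):
    # TOCTOU bug: the permission check happens here ...
    if os.access(path, os.R_OK):
        # ... but an attacker who controls `path` (e.g. via a symlink
        # swap) can change what it points to before this open() runs.
        with open(path) as f:
            return f.read()
    return None

def read_if_allowed_safer(path):
    # Safer: attempt the operation and handle failure, so the check
    # and the use happen in a single call rather than leaving a gap.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None
```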

NVD/CVE Database
04

CVE-2026-21518: Improper neutralization of special elements used in a command ('command injection') in GitHub Copilot and Visual Studio allows an unauthorized attacker to bypass security features over a network.

security
Feb 10, 2026

CVE-2026-21518 is a command injection vulnerability (a flaw where attackers can insert malicious commands into user input) in GitHub Copilot and Visual Studio Code that allows an unauthorized attacker to bypass security features over a network. The vulnerability stems from improper handling of special characters in commands. No CVSS severity score (a 0-10 rating of how serious a vulnerability is) has been assigned yet by NIST.
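The Copilot flaw itself isn't documented beyond this summary, but "improper handling of special characters in commands" is easy to show in miniature. A hedged sketch of the vulnerable pattern and one standard mitigation:

```python
import shlex

def build_grep_command_unsafe(pattern, path):
    # VULNERABLE: naive interpolation means a pattern like
    # '"; rm -rf ~; echo "' breaks out of the quotes, and a shell
    # executing this string would run the smuggled command.
    return f'grep "{pattern}" {path}'

def build_grep_command_safe(pattern, path):
    # Safer: shlex.quote() forces each untrusted value to remain a
    # single shell word (passing an argv list with shell=False to
    # subprocess.run avoids shell parsing entirely and is better still).
    return f"grep {shlex.quote(pattern)} {shlex.quote(path)}"
```

With the malicious pattern above, the unsafe string splits into many shell words (including `rm`), while the safe version keeps the entire payload as one inert argument.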

NVD/CVE Database
05

CVE-2026-21516: Improper neutralization of special elements used in a command ('command injection') in GitHub Copilot allows an unauthorized attacker to execute code remotely.

security
Feb 10, 2026

GitHub Copilot contains a command injection vulnerability (CVE-2026-21516), which is a flaw where special characters in user input are not properly filtered, allowing an attacker to execute code remotely on a system. The vulnerability was reported by Microsoft Corporation and has a CVSS score pending assessment.

NVD/CVE Database
06

CVE-2026-21257: Improper neutralization of special elements used in a command ('command injection') in GitHub Copilot and Visual Studio allows an authorized attacker to elevate privileges over a network.

security
Feb 10, 2026

CVE-2026-21257 is a command injection vulnerability (a flaw where attackers can insert malicious commands into an application) found in GitHub Copilot and Visual Studio that allows an authorized attacker to gain elevated privileges over a network. The vulnerability stems from improper handling of special characters in commands. As of the source date, a CVSS severity score (a 0-10 rating of how severe a vulnerability is) had not yet been assigned by NIST.

NVD/CVE Database
07

CVE-2026-21256: Improper neutralization of special elements used in a command ('command injection') in GitHub Copilot and Visual Studio allows an unauthorized attacker to execute code over a network.

security
Feb 10, 2026

CVE-2026-21256 is a command injection vulnerability (a flaw where attackers can sneak malicious commands into input that a program then executes) found in GitHub Copilot and Visual Studio that allows unauthorized attackers to run code on a network. The vulnerability stems from improper handling of special characters in commands, which means the software doesn't properly filter or neutralize dangerous input before using it.

NVD/CVE Database
08

A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

industry
Feb 10, 2026

QuitGPT is a campaign urging people to cancel their ChatGPT Plus subscriptions, citing concerns about OpenAI president Greg Brockman's donation to a political super PAC and the use of ChatGPT-4 by US Immigration and Customs Enforcement for résumé screening. The campaign, which began in late January and has garnered over 36 million Instagram views, asks supporters to either cancel their subscriptions, commit to stop using ChatGPT, or share the campaign on social media, with organizers hoping that enough canceled subscriptions will pressure OpenAI to change its practices.

MIT Technology Review
09

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier

security · policy
Feb 10, 2026

Most Fortune 500 companies now use AI agents (software that can act and make decisions with minimal human input), but many lack visibility into how many agents are running and what data they access, creating security risks. The report recommends applying Zero Trust security principles (requiring strong identity verification and giving users/agents only the minimum access they need) to AI agents the same way organizations do for human employees.
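The least-privilege recommendation can be sketched as an explicit per-agent tool allowlist with an audit trail; every call is checked and logged before dispatch (all names here are hypothetical, not Microsoft's or any vendor's API):

```python
# Minimal Zero Trust sketch for agent tool calls: deny by default,
# grant only what each agent identity needs, and record every attempt.
AGENT_PERMISSIONS = {
    "support-bot": {"search_docs", "create_ticket"},
    "report-bot": {"search_docs"},
}

audit_log = []  # observability: who tried what, and whether it was allowed

def call_tool(agent_id, tool_name):
    permitted = tool_name in AGENT_PERMISSIONS.get(agent_id, set())
    audit_log.append((agent_id, tool_name, permitted))
    if not permitted:
        raise PermissionError(f"{agent_id} may not call {tool_name}")
    return f"{tool_name} executed"  # real code would dispatch to the tool here
```

The audit log is what gives security teams the visibility the report says is missing: even denied calls leave a record.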

Microsoft Security Blog
10

langchain==1.2.10

security
Feb 10, 2026

LangChain released version 1.2.10, which includes a bug fix for token counting on partial message sequences (a partial message sequence is a subset of messages in a conversation), dependency updates, and code refactoring to rename internal variables.
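For illustration only, token counting over a partial message sequence might look like the following whitespace-based approximation (real implementations use the model's tokenizer; this is not LangChain's code):

```python
def count_tokens(messages):
    """Approximate token count over a (possibly partial) message list.

    Whitespace splitting stands in for a real tokenizer; entries in a
    partial sequence may be missing their "content" field entirely,
    which is the edge case the fix described above concerns.
    """
    total = 0
    for msg in messages:
        content = msg.get("content") or ""  # tolerate absent/None content
        total += len(content.split())
    return total
```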

LangChain Security Releases
critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
NVD/CVE Database · Mar 30, 2026

critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CISA Known Exploited Vulnerabilities · Mar 26, 2026