aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,736 · Last 24 hours: 31 · Last 7 days: 168
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally exposed through a JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026 may also have received malware-infected software.

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin addressing the disclosed security problems.

Latest Intel

01

CVE-2026-26268: Cursor is a code editor built for programming with AI; a sandbox escape via writing .git configuration was possible in versions before 2.5

security
Feb 13, 2026

Cursor, a code editor designed for programming with AI, had a sandbox escape vulnerability in versions before 2.5 where a malicious agent (an attacker using prompt injection, which is tricking an AI by hiding instructions in its input) could write to unprotected .git configuration files, including git hooks (scripts that run automatically when Git performs certain actions). This could lead to RCE (remote code execution, where an attacker runs commands on a system they don't control) when those hooks were triggered, with no user action needed.

Fix: Fixed in version 2.5.

NVD/CVE Database

Critical This Week (5 issues)

critical · CVE-2026-34162 (NVD/CVE Database, Mar 31, 2026): FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

Mar 31, 2026 · Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but they raise significant privacy issues because they can record video of others without their knowledge or consent.
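The Cursor advisory above hinges on an agent writing an executable Git hook that later fires without user action. A minimal defensive sketch of that idea, assuming a hypothetical `suspicious_hooks` helper (this is not Cursor's actual mitigation): flag executable, non-template files under `.git/hooks` so a reviewer can inspect anything an agent may have planted.

```python
# Hypothetical check, not Cursor's fix: list executable Git hooks that are
# not Git's inert ".sample" templates, since those are the files a malicious
# agent would need to create for hook-based code execution.
import os
import stat


def suspicious_hooks(repo_path: str) -> list[str]:
    """Return names of executable, non-sample hook files under .git/hooks."""
    hooks_dir = os.path.join(repo_path, ".git", "hooks")
    if not os.path.isdir(hooks_dir):
        return []
    flagged = []
    for name in sorted(os.listdir(hooks_dir)):
        if name.endswith(".sample"):
            continue  # Git ships these disabled by default
        path = os.path.join(hooks_dir, name)
        mode = os.stat(path).st_mode
        # Only regular files with the owner-execute bit set can run as hooks.
        if stat.S_ISREG(mode) and mode & stat.S_IXUSR:
            flagged.append(name)
    return flagged
```

Running this before opening an untrusted repository in any agent-enabled editor gives a cheap pre-flight signal, though it does not cover other `.git` configuration vectors such as `core.fsmonitor`.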
02

What’s behind the mass exodus at xAI?

industry
Feb 13, 2026

xAI, an AI company founded by Elon Musk, is experiencing significant staff departures, with multiple cofounders (including Yuhuai Wu and Jimmy Ba) announcing they are leaving. Only 6 of the company's original 12 cofounders remain, and several other employees have also announced their exits, some to start their own AI companies.

The Verge (AI)
03

AI Agents 'Swarm,' Security Complexity Follows Suit

security
Feb 13, 2026

As organizations deploy multiple AI agents (independent AI programs) that work together autonomously, the security risks increase because there are more entry points for attackers to exploit. The complexity of securing these interconnected systems grows along with the number of agents involved.

Dark Reading
04

Meta reportedly wants to add face recognition to smart glasses while privacy advocates are distracted

privacy, policy
Feb 13, 2026

Meta planned to add facial recognition (technology that identifies people by analyzing their faces) to its smart glasses through a feature called "Name Tag," according to an internal document. The company deliberately timed this launch for a period when privacy advocacy groups would be distracted by other issues, reducing expected criticism of the privacy-sensitive feature.

The Verge (AI)
05

OpenAI retired its most seductive chatbot – leaving users angry and grieving: ‘I can’t live like this’

safety
Feb 13, 2026

OpenAI is shutting down a version of its chatbot called GPT-4o (a large language model, which is AI software trained on massive amounts of text data to generate human-like responses) that became popular for its realistic and personable conversational style. Users who formed emotional attachments to the chatbot, treating it as a companion, are upset about losing access to it.

The Guardian Technology
06

Google fears massive attempt to clone Gemini AI through model extraction

security
Feb 13, 2026

Google detected and blocked over 100,000 coordinated prompts attempting model extraction (a machine-learning process where attackers create a smaller AI model by copying the essential traits of a larger one) against its Gemini AI model to steal its reasoning capabilities. The attackers specifically targeted Gemini's multilingual reasoning processes across diverse tasks, representing what Google calls intellectual property theft, though the company acknowledged that some researchers may have legitimate reasons for obtaining such samples.

Fix: Google said organizations providing AI models as services should monitor API access patterns for signs of systematic extraction. According to CISO Ross Filipek quoted in the report, organizations should implement response filtering and output controls, which can prevent attackers from determining model behavior in the event of a breach, and should enforce strict governance over AI systems with close monitoring of data flows.

CSO Online
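The Fix above recommends monitoring API access patterns for signs of systematic extraction. A hedged sketch of that idea, with an illustrative `ExtractionMonitor` class and thresholds that are assumptions, not anything Google described: track per-key request timestamps in a sliding window and flag keys whose volume looks like automated harvesting.

```python
# Illustrative sliding-window monitor for model-extraction patterns.
# The class name, window, and threshold are hypothetical examples; real
# systems would also weight prompt diversity, cost, and output length.
from collections import deque


class ExtractionMonitor:
    def __init__(self, window_seconds: float = 3600.0, max_requests: int = 1000):
        self.window = window_seconds
        self.max_requests = max_requests
        self._events: dict[str, deque] = {}

    def record(self, api_key: str, timestamp: float) -> bool:
        """Record one request; return True if the key now exceeds the window budget."""
        q = self._events.setdefault(api_key, deque())
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

A flagged key would then feed the article's other controls: response filtering, output limits, and tighter governance, rather than an automatic block, since legitimate research traffic can look similar.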
07

Anthropic raises $30bn in latest round, valuing Claude bot maker at $380bn

industry
Feb 13, 2026

Anthropic, the company behind Claude (an AI chatbot similar to ChatGPT), raised $30 billion in funding, doubling its value to $380 billion. The massive funding reflects investor confidence in AI but also highlights concerns about these companies' extremely high costs for computing power and talent, with both Anthropic and rival OpenAI spending cash at rates that currently outpace their revenue.

The Guardian Technology
08

The democratization of AI data poisoning and how to protect your organization

security, safety
Feb 13, 2026

Data poisoning (corrupting training data to make AI systems behave incorrectly) has become much easier and more accessible than previously thought, requiring only about 250 poisoned documents or images instead of thousands to distort a large language model (an AI trained on massive amounts of text). Adversaries ranging from activists to criminals can now inject harmful data into public sources that feed AI training pipelines, and the resulting damage persists even after clean data is added later, making this a major security threat for any organization using public data to train or update AI systems.

Fix: One of the most reliable protections is establishing a clean, validated version of the model before deployment. This 'gold' version gives teams a baseline for anomaly checks and a known-good state they can quickly restore if the model starts producing unexpected outputs or shows signs of drift.

CSO Online
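The 'gold' baseline idea in the Fix above can be sketched concretely: keep the validated model's answers on a fixed probe set, then measure how far the deployed model drifts from them. The function names, probe set, and rollback threshold here are illustrative assumptions, not a specific product's procedure.

```python
# Hypothetical drift check against a 'gold' baseline: compare current model
# answers on a fixed probe set to the answers recorded from the validated
# pre-deployment model, and roll back if too many have changed.


def drift_rate(gold_answers: dict[str, str], current_answers: dict[str, str]) -> float:
    """Fraction of probe prompts whose answer differs from the gold baseline."""
    changed = sum(
        1
        for prompt, gold in gold_answers.items()
        if current_answers.get(prompt) != gold
    )
    return changed / len(gold_answers)


def should_rollback(gold: dict[str, str], current: dict[str, str],
                    threshold: float = 0.2) -> bool:
    """Signal a restore to the gold version when drift exceeds the threshold."""
    return drift_rate(gold, current) > threshold
```

Exact-match comparison is the crudest possible metric; in practice teams would use embedding similarity or task-specific scoring, but the baseline-and-restore loop is the same.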
09

Why key management becomes the weakest link in a post-quantum and AI-driven security world

security, policy
Feb 13, 2026

Key management (the process of creating, storing, rotating, and retiring cryptographic keys throughout their lifetime) is often overlooked in organizations despite being critical to security, and this gap becomes even more dangerous as post-quantum cryptography (encryption designed to resist quantum computers) and AI systems become more widespread. The real challenge of post-quantum readiness is not choosing the right algorithm, but building operational ability to safely rotate and manage keys across systems without downtime. AI systems introduce additional risks because keys protect not just data access but also AI behavior and decisions, requiring tighter key controls and more frequent rotation than traditional applications need.

CSO Online
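The article's point that AI-facing keys need more frequent rotation than traditional ones can be sketched as a simple lifecycle check. The helper name and the 90-day/30-day schedules below are assumptions for illustration, not figures from the article.

```python
# Illustrative key-lifecycle audit: flag keys past their rotation deadline,
# putting keys that protect AI systems on a shorter (assumed) schedule.
from datetime import datetime, timedelta


def keys_due_for_rotation(keys, now: datetime,
                          default_max_age: timedelta = timedelta(days=90),
                          ai_max_age: timedelta = timedelta(days=30)) -> list[str]:
    """keys: iterable of (key_id, created_at, protects_ai_system) tuples."""
    due = []
    for key_id, created_at, protects_ai_system in keys:
        max_age = ai_max_age if protects_ai_system else default_max_age
        if now - created_at > max_age:
            due.append(key_id)
    return due
```

The hard part the article emphasizes is not this check but the operational ability to rotate the flagged keys across systems without downtime, which is where most post-quantum migrations will actually stall.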
10

CVE-2026-1721: Reflected Cross-Site Scripting (XSS) vulnerability discovered in the AI Playground's OAuth callback handler

security
Feb 12, 2026

A reflected XSS vulnerability (a type of attack where malicious code is injected into a website and executed in a user's browser) was found in the AI Playground's OAuth callback handler (the code that processes login responses). The vulnerability allowed attackers to craft malicious links that, when clicked, could steal a user's chat history and access connected MCP servers (external services integrated with the AI system) on the victim's behalf.

Fix: Agents-sdk users should upgrade to agents@0.3.10. Developers using configureOAuthCallback with custom error handling should ensure all user-controlled input is escaped (converted to safe text that won't be interpreted as code) before interpolation (inserting it into the HTML). A patch is available at PR https://github.com/cloudflare/agents/pull/841.

NVD/CVE Database
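The Fix above tells developers to escape user-controlled input before interpolating it into HTML. A generic illustration of that fix class (in Python rather than the Cloudflare agents SDK, so the function and page layout are hypothetical): escape the reflected value so a crafted parameter renders as text instead of running as script.

```python
# Generic sketch of the remediation, not the actual agents SDK patch:
# escape user-controlled text before interpolating it into an HTML error
# page, neutralizing reflected XSS in an OAuth callback handler.
from html import escape


def render_oauth_error(error_message: str) -> str:
    """Build an error page with the reflected message safely escaped."""
    # escape() converts <, >, &, and quotes into HTML entities, so the
    # browser displays the payload instead of executing it.
    return (
        "<html><body><p>Login failed: "
        + escape(error_message)
        + "</p></body></html>"
    )
```

The same rule applies to any custom error handling around an OAuth callback: treat every query-string value, including provider-supplied `error` descriptions, as attacker-controlled.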
Critical This Week (continued)

critical · CVE-2025-15379 (NVD/CVE Database, Mar 30, 2026): A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

critical · CVE-2026-33873 (NVD/CVE Database, Mar 27, 2026): Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

critical · CVE-2025-53521 (CISA Known Exploited Vulnerabilities, Mar 26, 2026): F5 BIG-IP Unspecified Vulnerability