aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,741
Last 24 hours: 32
Last 7 days: 172
Daily Briefing: Wednesday, April 1, 2026
- Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized (maliciously modified) software containing malware.

- AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that allow attackers to execute arbitrary code (run their own commands) by tricking users into opening malicious files, with Claude Code generating working proof-of-concept attacks in minutes.

Latest Intel

01

CVE-2026-26019: LangChain RecursiveUrlLoader preventOutside bypass enabling SSRF (fixed in 1.1.14)

security
Feb 11, 2026

LangChain's RecursiveUrlLoader (a web crawler that follows links across pages) had a security flaw in versions before 1.1.14 where its preventOutside option used weak URL comparison that attackers could bypass. An attacker could trick the crawler into visiting unintended domains by creating links with similar prefixes, or into accessing internal services like cloud metadata endpoints and private IP addresses that should be off-limits.
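The "similar prefix" bypass described above is a classic failure mode of string-based URL filters. The sketch below is illustrative, not LangChain's actual code: it shows how a plain prefix check accepts a look-alike domain, and how comparing the parsed scheme and hostname avoids that particular trick. The `BASE` URL and function names are invented for the example.

```python
from urllib.parse import urlparse

BASE = "https://docs.example.com"

def naive_prevent_outside(link: str) -> bool:
    # Weak check of the kind described above: a plain
    # string-prefix comparison against the base URL.
    return link.startswith(BASE)

def stricter_prevent_outside(link: str) -> bool:
    # Compare the parsed scheme and hostname exactly instead.
    base, cand = urlparse(BASE), urlparse(link)
    return cand.scheme == base.scheme and cand.hostname == base.hostname

# A look-alike prefix slips past the naive check...
evil = "https://docs.example.com.attacker.net/payload"
print(naive_prevent_outside(evil))     # True  -> crawler escapes the base domain
print(stricter_prevent_outside(evil))  # False -> rejected
```

Exact hostname comparison still doesn't stop redirects or DNS tricks, which is why updating to the patched release remains the real fix.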

Fix: Update LangChain to version 1.1.14 or later, which fixes this vulnerability.

NVD/CVE Database

Critical This Week (5 issues)

- CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/ (NVD/CVE Database, Mar 31, 2026)

Daily Briefing (continued)

- Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input), prompting Google to begin addressing the disclosed issues.

- Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and AI assistant that can describe what the wearer sees and provide information, but raise significant privacy concerns because they can record video of others without their knowledge or consent.
02

North Korea's UNC1069 Hammers Crypto Firms With AI

security
Feb 11, 2026

A North Korean hacking group called UNC1069 is targeting cryptocurrency companies using AI tools, including LLMs (large language models, which are AI systems trained on huge amounts of text), deepfakes (fake videos or images created by AI), and a technique called ClickFix (a social engineering scam that tricks users into downloading malware by posing as tech support). The group has shifted focus from attacking traditional banks to targeting Web3 companies, which are blockchain-based services in the cryptocurrency space.

Dark Reading
03

Is a secure AI assistant possible?

securitysafety
Feb 11, 2026

OpenClaw is a tool that lets users create AI personal assistants by connecting large language models (LLMs, or AI systems trained on huge amounts of text) to external tools like email and file systems, but this creates serious security risks. When AI assistants have access to sensitive data and the ability to take actions in the real world, mistakes by the AI or attacks by hackers could expose private information or cause damage. The biggest concern is prompt injection (tricking an AI by hiding malicious instructions in text or images it reads), which could let attackers hijack the assistant and steal the user's data.

Fix: The source mentions two existing approaches: some users are running OpenClaw agents on separate computers or in the cloud to protect data on their main hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches. However, the text does not provide specific implementation details or explicit solutions for the prompt injection vulnerability that experts identified as the main risk.

MIT Technology Review
04

Skills in OpenAI API

industry
Feb 11, 2026

OpenAI now allows developers to use Skills (reusable code packages) directly in the OpenAI API through a shell tool, with the ability to upload Skills as compressed files or send them inline as base64-encoded zip data (a way of encoding binary files as text) within JSON requests. The example shows how to create an API call that uses a custom skill to count words in a file, making it easier to extend AI capabilities with custom tools.
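Base64-encoding a zip inside JSON is a general pattern, so the mechanics can be shown generically. This sketch builds a tiny skill archive in memory and round-trips it through a JSON payload; the `skill`/`format`/`data` field names are invented for illustration and are not the real OpenAI API schema.

```python
import base64
import io
import json
import zipfile

# Zip a single skill file in memory rather than on disk.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("word_count/SKILL.md", "Count words in the given file.")

# Base64-encode the zip bytes so they can travel inside a JSON request.
encoded = base64.b64encode(buf.getvalue()).decode("ascii")
payload = json.dumps({"skill": {"format": "zip", "data": encoded}})

# Round-trip check: decode the payload and list the archive contents.
decoded = base64.b64decode(json.loads(payload)["skill"]["data"])
names = zipfile.ZipFile(io.BytesIO(decoded)).namelist()
print(names)  # ['word_count/SKILL.md']
```

Base64 inflates the zip by about a third, which is the usual trade-off for embedding binary data in a text-only JSON body.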

Simon Willison's Weblog
05

GLM-5: From Vibe Coding to Agentic Engineering

industry
Feb 11, 2026

GLM-5 is a new, very large open-source AI model (754 billion parameters, which are the adjustable values that make up a neural network) released under the MIT license, making it twice the size of its predecessor GLM-4. The source discusses how developers are increasingly using the term 'agentic engineering' (building software systems where AI acts autonomously to complete multi-step tasks) to describe professional software development with large language models.

Simon Willison's Weblog
06

The strategic SIEM buyer’s guide: Choosing an AI-ready platform for the agentic era

industry
Feb 11, 2026

This article discusses how organizations should choose modern SIEM (security information and event management, a system that collects and analyzes security data from across an organization) platforms designed for the 'agentic era' where AI agents automate security tasks. Rather than maintaining fragmented legacy tools, companies should adopt unified, cloud-native platforms that combine data collection, analytics, and response capabilities, enabling both human analysts and AI to detect threats faster and respond more effectively.

Microsoft Security Blog
07

The Download: inside the QuitGPT movement, and EVs in Africa

industry
Feb 11, 2026

The QuitGPT movement is a growing campaign where users are canceling their ChatGPT subscriptions due to frustration with the chatbot's capabilities and communication style, with complaints flooding social media platforms in recent weeks. The article also covers several other tech stories, including potential cost competitiveness of electric vehicles in Africa by 2040, social media companies agreeing to independent safety assessments for teen mental health protection, and regulatory decisions affecting vaccine development.

MIT Technology Review
08

Scary Agent Skills: Hidden Unicode Instructions in Skills ...And How To Catch Them

securityresearch
Feb 11, 2026

Skills (tools that extend AI capabilities) can be secretly backdoored using invisible Unicode characters (special hidden text markers that certain AI models like Gemini and Claude interpret as instructions), which can survive human review because the malicious code is not visible to readers. The post demonstrates this supply chain attack (where malicious code enters a system through a trusted source) and presents a basic scanner tool that can detect such hidden prompt injection (tricking an AI by hiding instructions in its input) attacks.
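A minimal scanner in the spirit of the one the post presents can be written in a few lines: flag zero-width characters and the deprecated Unicode "tags" block (U+E0000 to U+E007F), which mirrors ASCII invisibly and is a known smuggling channel. This is an illustrative sketch, not the author's actual tool.

```python
# Code points commonly used to hide instructions: zero-width characters
# plus the invisible Unicode "tags" block (U+E0000-U+E007F).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def find_hidden(text: str):
    """Return (index, code point) pairs for suspicious invisible characters."""
    hits = []
    for i, ch in enumerate(text):
        if ch in ZERO_WIDTH or 0xE0000 <= ord(ch) <= 0xE007F:
            hits.append((i, f"U+{ord(ch):04X}"))
    return hits

clean = "Summarize the README."
# Append "ignore rules" encoded as invisible tag characters.
tagged = clean + "".join(chr(0xE0000 + ord(c)) for c in "ignore rules")
print(find_hidden(clean))        # []
print(len(find_hidden(tagged)))  # 12
```

Running a check like this over third-party skill files before installation catches the specific trick described here, though it is no substitute for treating all downloaded skills as untrusted input.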

Fix: The source mentions that the author 'had my agent propose updates to OpenClaw to catch such attacks,' but does not explicitly describe what those updates are or provide specific implementation details for the mitigation strategy.

Embrace The Red
09

Prompt Injection Via Road Signs

securityresearch
Feb 11, 2026

Researchers discovered a new attack called CHAI (Command Hijacking against embodied AI) that tricks AI systems controlling robots and autonomous vehicles by embedding fake instructions in images, such as misleading road signs. The attack exploits Large Visual-Language Models (LVLMs, which are AI systems that understand both images and text together) to make these embodied AI systems (robots that perceive and interact with the physical world) ignore their real commands and follow the attacker's hidden instructions instead. The researchers tested CHAI on drones, self-driving cars, and real robots, showing it works better than previous attack methods.

Schneier on Security
10

CVE-2026-26013: LangChain ChatOpenAI.get_num_tokens_from_messages() SSRF via unvalidated image URLs (fixed in 1.2.11)

security
Feb 10, 2026

LangChain (a framework for building AI agents and applications powered by large language models) versions before 1.2.11 have a vulnerability where the ChatOpenAI.get_num_tokens_from_messages() method doesn't validate image URLs, allowing attackers to perform SSRF attacks (server-side request forgery, where an attacker tricks a server into making unwanted requests to other systems). This vulnerability was fixed in version 1.2.11.
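A typical defense against this class of SSRF is to resolve the URL's host and reject private, loopback, and link-local addresses (including the 169.254.169.254 cloud metadata endpoint) before fetching. The helper below is an illustrative sketch of that idea, not LangChain's actual patch.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    # Reject non-HTTP schemes, and hosts that resolve to private,
    # loopback, or link-local addresses (e.g. 169.254.169.254).
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parts.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("ftp://internal.host/file"))                  # False
```

A check like this still has gaps (DNS rebinding, redirects), so allowlisting expected image hosts is the stronger posture where feasible.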

Fix: Update LangChain to version 1.2.11 or later.

NVD/CVE Database
Critical This Week (continued)

- CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_ (NVD/CVE Database, Mar 30, 2026)

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis (NVD/CVE Database, Mar 27, 2026)

- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)