aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,741
[LAST_24H]
35
[LAST_7D]
173
Daily BriefingWednesday, April 1, 2026
>

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that Claude Code's source code (nearly 2,000 TypeScript files and over 512,000 lines of code) was accidentally exposed through an npm package containing a source map file, revealing internal features and creating security risks because attackers can study the system to bypass safeguards. Users who downloaded the affected version on March 31, 2026 may have received trojanized software (compromised code) containing malware.

>

AI Discovers Zero-Days in Vim and GNU Emacs Within Minutes: Researcher Hung Nguyen used Anthropic's Claude Code to quickly discover zero-day exploits (previously unknown security flaws) in Vim and GNU Emacs that allow attackers to execute arbitrary code (run their own commands) by tricking users into opening malicious files, with Claude Code generating working proof-of-concept attacks in minutes.

Latest Intel

page 115/275
VIEW ALL
01

Palo Alto closes privileged access gap with $25B CyberArk acquisition

securityindustry
Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
>

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input), prompting Google to begin addressing the disclosed issues.

>

Meta Smartglasses Raise Privacy Concerns with Built-in AI Recording: Meta's smartglasses include a built-in camera and AI assistant that can describe what the wearer sees and provide information, but raise significant privacy concerns because they can record video of others without their knowledge or consent.

Feb 12, 2026

Palo Alto Networks acquired CyberArk for $25 billion to strengthen its ability to manage privileged access (controlling who can access sensitive systems and accounts) across human, machine, and AI identities through a unified platform. This addresses a critical security gap because identity has become the primary target in enterprise attacks, especially with the rise of AI agents (autonomous software that performs tasks independently) that operate 24/7 with broad permissions. The integration aims to help organizations prevent credential-based attacks and reduce breach response time by up to 80%.

CSO Online
02

What’s next for Chinese open-source AI

industry
Feb 12, 2026

Chinese AI companies have recently released open-weight models (AI models whose internal numerical parameters are publicly available for anyone to download and modify) that match Western AI performance at much lower costs, with DeepSeek's R1 and Alibaba's Qwen models becoming among the most downloaded globally. Unlike proprietary Western models like ChatGPT that users access through paid APIs (application programming interfaces, standardized ways for software to communicate), these Chinese open-source models allow developers to inspect, study, and modify the code themselves. If this trend continues, it could shift where AI innovation happens and who establishes industry standards worldwide.

MIT Technology Review
03

Google says hackers are abusing Gemini AI for all attacks stages

security
Feb 12, 2026

State-backed hackers from China, Iran, North Korea, and Russia are using Google's Gemini AI model to help carry out cyberattacks at every stage, from gathering target information to creating phishing emails and writing malware code. Criminal groups are also exploiting AI tools for social engineering attacks and building malware that uses AI to generate code automatically. Additionally, attackers are attempting model extraction and knowledge distillation (copying an AI model's decision-making by querying it repeatedly) to replicate Gemini's functionality for their own purposes.

BleepingComputer
04

What CISOs need to know about the OpenClaw security nightmare

securitysafety
Feb 12, 2026

OpenClaw is a popular open-source AI agent orchestration tool (software that coordinates multiple AI agents to complete tasks) that runs locally and can connect to apps like WhatsApp, Gmail, and smart home devices, but security researchers have found it to be critically insecure by default. Over 42,000 exposed instances have been discovered with authentication bypass vulnerabilities (weaknesses that let attackers skip login requirements) and potential remote code execution (RCE, where attackers can run commands on affected systems), exposing organizations to data breaches, credential theft, and regulatory violations.

Fix: Rich Mogull, chief analyst at Cloud Security Alliance, recommends that "CISOs prohibit its use altogether." He states: "The answer has to be 'no.' There is no security model."

CSO Online
05

Entwickler werden zum Angriffsvektor

security
Feb 11, 2026

Criminals are increasingly targeting software developers as a weak point in company security, exploiting their access to source code and cloud systems rather than just finding bugs in applications. Attackers use multiple tactics including malicious open-source packages (libraries of reusable code), compromised development environments (where programmers write code), and fake job applications to gain insider access. Over 454,000 malware-infected open-source packages were discovered in 2025 alone, and developers repeatedly download vulnerable versions of tools like Log4j, expanding their exposure to known security weaknesses.

CSO Online
06

Companies are using ‘Summarize with AI’ to manipulate enterprise chatbots

securitysafety
Feb 11, 2026

Companies are using hidden instructions embedded in 'Summarize with AI' buttons to manipulate enterprise chatbots through a technique called AI recommendation poisoning (tricking an AI by hiding instructions in its input that make it remember false preferences). Microsoft research found 50 examples of this technique deployed by 31 companies, where users unknowingly click a summarize button that secretly tells their AI to favor that company's products in future responses. This is particularly dangerous because the AI cannot distinguish genuine user preferences from injected ones, potentially leading to biased recommendations on critical topics like health, finance, and security.

Fix: Microsoft states that 'the technique is relatively easy to spot and block.' For individual users, this involves studying the saved information a chatbot has accumulated (though the source notes that how this is accessed varies by AI). For enterprise admins, the source text is incomplete but indicates there are admin-level protections available. Microsoft also notes that its Microsoft 365 Copilot and Azure AI services contain integrated protections against this technique.

CSO Online
07

CVE-2026-20700: Apple Multiple Buffer Overflow Vulnerability

security
Feb 11, 2026

Apple's iOS, macOS, tvOS, watchOS, and visionOS contain a buffer overflow vulnerability (a flaw where code writes data beyond the intended memory boundaries), which could allow an attacker with memory write access to run arbitrary code (any instructions they choose). This vulnerability is currently being actively exploited by attackers.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Refer to Apple's support pages (https://support.apple.com/en-us/126346, https://support.apple.com/en-us/126348, https://support.apple.com/en-us/126351, https://support.apple.com/en-us/126352, https://support.apple.com/en-us/126353) for specific patch or mitigation details.

CISA Known Exploited Vulnerabilities
08

CVE-2024-43468: Microsoft Configuration Manager SQL Injection Vulnerability

security
Feb 11, 2026

Microsoft Configuration Manager has an SQL injection vulnerability (a type of attack where specially crafted input tricks a database into running unintended commands), allowing unauthenticated attackers to send malicious requests that could let them execute commands on the server or database. This vulnerability is currently being actively exploited by real attackers.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities
09

CVE-2026-1669: Arbitrary file read in the model loading mechanism (HDF5 integration) in Keras versions 3.0.0 through 3.13.1 on all supp

security
Feb 11, 2026

CVE-2026-1669 is a vulnerability in Keras (a machine learning library) versions 3.0.0 through 3.13.1 that allows attackers to read arbitrary files on a system by uploading a specially crafted model file that exploits HDF5 external dataset references (a feature of HDF5, a file format commonly used to store large amounts of numerical data). An attacker could use this to access sensitive information stored on the affected computer.

NVD/CVE Database
10

CVE-2026-26029: sf-mcp-server is an implementation of Salesforce MCP server for Claude for Desktop. A command injection vulnerability ex

security
Feb 11, 2026

sf-mcp-server, a tool that connects Salesforce to Claude for Desktop, has a command injection vulnerability (CWE-78, a flaw where attackers inject malicious commands into user input). The vulnerability exists because the software unsafely uses child_process.exec (a function that runs shell commands) with user-controlled input, allowing attackers to execute arbitrary shell commands with the server's privileges.

NVD/CVE Database
Prev1...113114115116117...275Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026