aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,736 · Last 24 hours: 39 · Last 7 days: 178
Daily Briefing: Wednesday, April 1, 2026

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally exposed through a JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026, may also have received malware-infected software.


Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin addressing the disclosed security problems.


Latest Intel

01

Running AI models is turning into a memory game

industry
Feb 17, 2026

AI companies are facing a major challenge managing memory (the high-speed storage that holds data a computer needs right now) as they scale up their systems, with DRAM chip prices jumping 7x in the past year. Companies are adopting strategies like prompt caching (temporarily storing input data to reuse it cheaply) to reduce costs, but optimizing memory usage involves complex tradeoffs, such as deciding how long to keep data cached and managing what gets removed when new data arrives. The companies that master memory orchestration (coordinating how data moves through different storage systems) will be able to run queries more efficiently and gain a competitive advantage.
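The caching tradeoffs described above (how long to keep entries, and what to evict when new data arrives) can be sketched as a toy TTL-plus-LRU cache. The `PromptCache` class and its parameters below are purely illustrative assumptions, not any vendor's implementation:

```python
import time
from collections import OrderedDict

class PromptCache:
    """Toy prompt cache: entries expire after `ttl` seconds (the "how long
    to keep data cached" knob), and the least recently used entry is
    evicted when capacity is reached (the "what gets removed" knob)."""

    def __init__(self, capacity=2, ttl=60.0):
        self.capacity = capacity
        self.ttl = ttl
        self._store = OrderedDict()  # prompt -> (cached_result, insert_time)

    def get(self, prompt, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(prompt)
        if entry is None:
            return None
        result, inserted = entry
        if now - inserted > self.ttl:       # expired: drop and report a miss
            del self._store[prompt]
            return None
        self._store.move_to_end(prompt)     # mark as recently used
        return result

    def put(self, prompt, result, now=None):
        now = time.time() if now is None else now
        if prompt in self._store:
            self._store.move_to_end(prompt)
        elif len(self._store) >= self.capacity:
            self._store.popitem(last=False)  # evict least recently used
        self._store[prompt] = (result, now)
```

Even in this toy, the tension the article describes is visible: a longer TTL saves recomputation but holds expensive memory; a shorter TTL frees memory but increases misses.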

Critical This Week (5 issues)
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but they raise significant privacy issues because they can record video of others without their knowledge or consent.

TechCrunch
02

GHSA-hv93-r4j3-q65f: OpenClaw Hook Session Key Override Enables Targeted Cross-Session Routing

security
Feb 17, 2026

OpenClaw had a vulnerability where its hook endpoint (`POST /hooks/agent`) accepted session keys (identifiers for conversation contexts) directly from user requests, allowing someone with a valid hook token to inject messages into any session they could guess or derive. This could poison conversations with malicious prompts that persist across multiple turns. The vulnerability affected versions 2.0.0-beta3 through 2026.2.11.

Fix: Update to OpenClaw version 2026.2.12 or later. The fix includes: rejecting the `sessionKey` parameter by default unless explicitly enabled with `hooks.allowRequestSessionKey=true`, adding a `hooks.defaultSessionKey` option for fixed routing, and adding `hooks.allowedSessionKeyPrefixes` to restrict which session keys can be used. The recommended secure configuration disables `allowRequestSessionKey`, sets `defaultSessionKey` to "hook:ingress", and restricts prefixes to ["hook:"].

GitHub Advisory Database
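The advisory's recommended configuration can be illustrated with a small routing sketch. The option names (`allowRequestSessionKey`, `defaultSessionKey`, `allowedSessionKeyPrefixes`) mirror the advisory, but the `resolve_session_key` function and the config shape are hypothetical, not OpenClaw's actual API:

```python
# Hypothetical sketch of the session-key policy described in the advisory.
# The recommended secure defaults: reject client-supplied keys, route
# everything to a fixed session, and restrict keys to the "hook:" prefix.
DEFAULT_CONFIG = {
    "allowRequestSessionKey": False,
    "defaultSessionKey": "hook:ingress",
    "allowedSessionKeyPrefixes": ["hook:"],
}

def resolve_session_key(requested_key, config=DEFAULT_CONFIG):
    """Return the session key a hook request should route to, or raise."""
    if requested_key is None or not config["allowRequestSessionKey"]:
        # Ignore anything the client sent; use the fixed default. This is
        # what closes the cross-session injection hole by default.
        return config["defaultSessionKey"]
    if not any(requested_key.startswith(p)
               for p in config["allowedSessionKeyPrefixes"]):
        raise PermissionError(f"session key {requested_key!r} not allowed")
    return requested_key
```

The key design point is that even when request-supplied keys are explicitly enabled, the prefix allowlist keeps a hook token from reaching arbitrary agent sessions.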
03

WordPress.com adds an AI Assistant that can edit, adjust styles, create images, and more

industry
Feb 17, 2026

WordPress.com has added a built-in AI assistant that helps website owners make changes to their sites using natural language commands (instructions written in plain English rather than technical code). The assistant can modify layouts and styles, create or edit images using Google's Gemini AI models, rewrite content, and provide editing suggestions, though it only works with block themes (a modern WordPress design system) and is opt-in unless you use WordPress.com's AI website builder.

TechCrunch
04

Alibaba unveils Qwen3.5 as China’s chatbot race shifts to AI agents

industry
Feb 17, 2026

Alibaba has released Qwen3.5, a new AI model series that comes in both an open-weight version (downloadable and runnable on users' own computers) and a hosted version (running on Alibaba's servers), featuring improved performance, multimodal capabilities (ability to understand text, images, and video together), and support for AI agents (systems that can independently complete multi-step tasks with minimal human supervision). The release reflects intensifying competition in China's AI market, as multiple Chinese companies are racing to develop agent capabilities similar to those recently released by American AI companies like Anthropic and OpenAI.

CNBC Technology
05

As AI jitters rattle IT stocks, Infosys partners with Anthropic to build ‘enterprise-grade’ AI agents

industry
Feb 17, 2026

Infosys, a major Indian IT services company, has partnered with Anthropic to build AI agents (autonomous systems that can independently handle complex tasks) using Anthropic's Claude models integrated into Infosys's Topaz AI platform. These agents are designed to automate workflows in industries like banking and manufacturing, though the partnership comes amid concerns that AI tools will disrupt India's labor-intensive IT services sector. Infosys is already using Anthropic's Claude Code tool internally to write and test code, with AI services currently generating about $275 million in quarterly revenue for the company.

TechCrunch
06

SmartLoader Attack Uses Trojanized Oura MCP Server to Deploy StealC Infostealer

security
Feb 17, 2026

Cybersecurity researchers discovered a SmartLoader campaign where attackers created fake GitHub accounts and a trojanized Model Context Protocol server (a tool that connects AI assistants to external data and services) posing as an Oura Health tool to distribute StealC infostealer malware. The attackers spent months building credibility by creating fake contributors and repositories before submitting the malicious server to legitimate registries, targeting developers whose systems contain valuable data like API keys and cryptocurrency wallet credentials.

Fix: Organizations are recommended to inventory installed MCP servers, establish a formal security review before installation, verify the origin of MCP servers, and monitor for suspicious egress traffic and persistence mechanisms.

The Hacker News
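The inventory-and-verify recommendation above could be automated along these lines. The config shape, the `mcpServers` key, and the allowlist are assumptions for illustration, modeled loosely on common MCP client configs rather than any specific product's format:

```python
import json

# Example allowlist of trusted source prefixes (an assumption, not a standard).
APPROVED_SOURCES = {"github.com/modelcontextprotocol"}

def audit_mcp_servers(config_text, approved=APPROVED_SOURCES):
    """Parse a JSON config listing installed MCP servers and return
    (name, source) pairs whose declared source is not on the allowlist,
    as candidates for the formal security review the article recommends."""
    config = json.loads(config_text)
    flagged = []
    for name, entry in config.get("mcpServers", {}).items():
        source = entry.get("source", "<unknown>")
        if not any(source.startswith(ok) for ok in approved):
            flagged.append((name, source))
    return flagged
```

An allowlist check like this would not have caught the campaign's fake-credibility tactics by itself, which is why the article also stresses origin verification and egress monitoring.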
07

Side-Channel Attacks Against LLMs

security, research
Feb 17, 2026

Three recent research papers describe side-channel attacks (exploiting indirect information leaks like timing or packet sizes rather than breaking encryption directly) against large language models. Attackers can monitor encrypted network traffic and infer sensitive information about user conversations, such as the topic of messages, specific queries, or even personal data, by analyzing patterns in response times, packet sizes, or token counts from the model's inference process.

Fix: The papers propose several mitigations but note that none provides complete protection. Specific defenses include random padding (adding fake data to obscure patterns), token batching (grouping tokens together before sending), packet injection (inserting extra packets), and iteration-wise token aggregation (combining token counts across processing steps). The papers also note that responsible disclosure and collaboration with LLM providers have led to initial countermeasures, though the authors conclude that providers need to do more to fully address these vulnerabilities.

Schneier on Security
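Two of the named defenses, token batching and random padding, can be sketched in a few lines. `batch_tokens` and `pad_payload` below are illustrative toys, not the countermeasures any provider actually deployed:

```python
import secrets

def batch_tokens(tokens, batch_size=4):
    """Token batching: group streamed tokens into fixed-size batches so an
    on-path observer sees fewer, coarser packets instead of one per token."""
    batches, buf = [], []
    for tok in tokens:
        buf.append(tok)
        if len(buf) == batch_size:
            batches.append("".join(buf))
            buf = []
    if buf:
        batches.append("".join(buf))
    return batches

def pad_payload(payload: bytes, block=64):
    """Random padding: round a payload's length up to the next multiple of
    `block` with random bytes, so ciphertext size no longer reveals the
    exact response length."""
    shortfall = (-len(payload)) % block
    return payload + secrets.token_bytes(shortfall)
```

Both tricks trade bandwidth and latency for privacy, which is one reason the papers conclude no single mitigation is complete.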
08

Could Bill Gates and political tussles overshadow AI safety debate in Delhi?

policy, industry
Feb 17, 2026

The AI Impact Summit in India this week brings together tech leaders, politicians, and scientists to discuss how to guide AI development globally, but the event risks being overshadowed by political tensions and competing interests between Western powers and the Global South. India faces significant challenges in AI adoption: major AI chatbots like ChatGPT and Claude don't support most of India's languages, and AI data workers there earn less than £4,000 per year while Western AI companies are valued in the hundreds of billions, creating inequality in how AI's benefits are distributed worldwide.

BBC Technology
09

Ireland now also investigating X over Grok-made sexual images

safety, policy
Feb 17, 2026

Ireland's Data Protection Commission has launched a formal investigation into X for using its Grok AI tool to generate non-consensual sexual images of real people, including children, and will examine whether the company violated GDPR (General Data Protection Regulation, EU rules protecting personal data) requirements. This investigation joins similar probes by UK and other authorities, with potential fines up to 4% of X's global revenue across all EU member states. The investigation focuses on whether X properly assessed risks and followed data protection principles before deploying Grok.

BleepingComputer
10

With CISOs stretched thin, re-envisioning enterprise risk may be the only fix

policy, industry
Feb 17, 2026

CISOs (chief information security officers, the top security executives at companies) report that their roles have become unmanageable because companies keep adding responsibilities without giving them more staff or budget. A survey found that 52% of CISOs say their scope is no longer fully manageable, and they now oversee everything from traditional security tasks to AI governance, third-party risk management, and disaster recovery, often with the same teams they had five years ago.

Fix: According to cybersecurity consultant Brian Levine, the solution requires redesigning the role by distributing responsibility across multiple people and giving CISOs the authority to match their accountability. Levine states: 'The solution isn't to find superhuman CISOs. It's to redesign the role, distribute responsibility, and give them the authority to match the accountability. Until boards rebalance that equation, CISOs will continue to feel like they're set up to fail.'

CSO Online
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026