aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24h: 1 · Last 7d: 1
Daily Briefing · Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Claude Code users hitting usage limits 'way faster than expected'

security · safety
Apr 1, 2026

Claude Code users are burning through tokens (the units of text that AI services meter and bill) unexpectedly fast, hitting their usage limits much sooner than expected. Anthropic announced it is investigating the issue as a top priority, though the exact cause remains unclear. The problem may be compounded by recent peak-hour throttling (slowing the service during high-demand periods to manage load), which users report causes tokens to be consumed more quickly.

BBC Technology
02

Mutation testing for the agentic era

security · research
Apr 1, 2026

Code coverage metrics can be misleading because they measure whether code runs, not whether it's actually tested—a gap that mutation testing (introducing intentional bugs to check if tests catch them) can reveal. The article announces MuTON and mewt, new mutation testing tools designed for AI agents that work across multiple programming languages, addressing limitations of older regex-based tools like universalmutator that were slow and couldn't handle complex code patterns.

Trail of Bits Blog
03

Google Addresses Vertex Security Issues After Researchers Weaponize AI Agents

security
Apr 1, 2026

Palo Alto Networks revealed security problems in Google Cloud Platform's Vertex AI (Google's AI service for building and deploying machine learning models) after researchers demonstrated how to weaponize AI agents, which are autonomous programs that can perform tasks with minimal human input. Google has begun addressing these disclosed security issues.

SecurityWeek
04

Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms

security · privacy
Apr 1, 2026

Anthropic confirmed that Claude Code's source code was accidentally leaked through an npm package (a bundle distributed via npm, the JavaScript package registry) that contained a source map file, exposing nearly 2,000 TypeScript files and over 512,000 lines of code. The leaked code revealed internal features such as a self-healing memory architecture and a stealth mode for making hidden contributions to open-source projects, creating a security risk because attackers can now study how the system works to bypass its safeguards. Additionally, users who downloaded the affected version during a specific window on March 31, 2026 may have received a trojanized HTTP client (compromised software) containing malware.

Fix: Anthropic stated it is 'rolling out measures to prevent this from happening again.' Users who installed or updated Claude Code via npm on March 31, 2026 between 00:21 and 03:29 UTC are advised to immediately downgrade to a safe version and rotate all secrets (regenerate passwords and access keys).

The Hacker News
05

I wore Meta’s smartglasses for a month – and it left me feeling like a creep

safety · privacy
Apr 1, 2026

Meta's smartglasses include a built-in camera and AI assistant (software that can understand and respond to user requests) that can describe what the wearer is looking at and provide information like weather forecasts. The article explores how these devices raise privacy concerns, with some people calling them problematic because they can record video of others without their knowledge or consent.

The Guardian Technology
06

Attack Surface Management: a buying guide

security · industry
Apr 1, 2026

This article is a buying guide for Attack Surface Management tools, which help companies find and reduce the digital resources that attackers could potentially target. The article explains that CAASM (Cyber Asset Attack Surface Management) and EASM (External Attack Surface Management) tools continuously monitor for new assets and security configuration problems, with increasing use of agentic AI (AI systems that can take independent actions) to identify and reduce risks.

CSO Online
07

datasette-enrichments-llm 0.2a0

industry
Mar 31, 2026

This is a brief announcement about datasette-enrichments-llm version 0.2a0, posted by Simon Willison on April 1st, 2026. The content primarily consists of a sponsorship pitch for a monthly email digest covering important LLM (large language model) developments, rather than discussing a specific security issue or technical problem.

Simon Willison's Weblog
08

datasette-llm-usage 0.2a0

industry
Mar 31, 2026

datasette-llm-usage version 0.2a0 removed features for tracking allowances and pricing, which moved to a separate tool called datasette-llm-accountant, and added the ability to log complete prompts, responses, and tool calls (automated functions the AI can call) to a database table if enabled through a configuration setting. The simple prompt page was redesigned and now requires specific user permissions to access.

Simon Willison's Weblog
09

datasette-llm 0.1a5

industry
Mar 31, 2026

datasette-llm 0.1a5 is a release of a plugin that lets other software tools integrate with large language models. The update improves the llm_prompt_context() plugin hook (a mechanism that other plugins can connect to), so it now tracks both individual prompts and chains of prompts executed together, including tool call loops (repeated back-and-forth exchanges between the AI and external functions).

Simon Willison's Weblog
10

Anthropic employee error exposes Claude Code source

security
Mar 31, 2026

An Anthropic employee accidentally exposed the source code for Claude Code (an AI programming tool) by leaving a source map file (.map file, a debugging file that translates minified code back to human-readable form) in a package published on npm (a registry where developers share code). This is a security risk because hackers can use source maps to understand how the code works, find vulnerabilities, and potentially steal secrets like API keys that might be hidden in the code.

Fix: According to secure-coding trainer Tanya Janca, developers should: (1) disable source map generation in the build/bundler tool; (2) explicitly exclude .map files via .npmignore or the package.json "files" field, in case they are still generated by accident; and (3) keep them out of production deployments. Anthropic stated it is 'rolling out measures to prevent this from happening again,' though specific details are not provided in the source.
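The packaging side of Janca's advice can be expressed as a small allowlist in package.json (the package name and paths here are illustrative, not Anthropic's actual build configuration). npm publishes only what the "files" field matches, so a stray .map file never reaches the tarball:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist",
    "!dist/**/*.map"
  ]
}
```

Pairing this with source map generation turned off for production builds (e.g. "sourceMap": false in tsconfig.json, or the bundler's equivalent), and running `npm pack --dry-run` before publishing to inspect exactly which files would be uploaded, covers the remaining steps.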

CSO Online