aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,736
[LAST_24H]
39
[LAST_7D]
178
Daily BriefingWednesday, April 1, 2026
>

Claude Code Source Leaked via npm Packaging Error: Anthropic confirmed that nearly 2,000 TypeScript files (over 512,000 lines of code) from Claude Code were accidentally exposed through a JavaScript package repository, revealing internal features and allowing attackers to study how to bypass safeguards. Users who downloaded the affected package during a specific window on March 31, 2026 may have also received malware-infected software.

>

Google Addresses Vertex AI Security Issues After Weaponization Demo: Palo Alto Networks researchers demonstrated how to weaponize AI agents (autonomous programs that perform tasks with minimal human input) on Google Cloud's Vertex AI platform, prompting Google to begin addressing the disclosed security problems.

>

Latest Intel

page 109/274
VIEW ALL
01

Why 2025’s agentic AI boom is a CISO’s worst nightmare

securitysafety
Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026

Meta Smartglasses Raise Privacy Concerns with Covert Recording: Meta's smartglasses feature a built-in camera and AI assistant that can describe surroundings and answer questions, but raise significant privacy issues because they can record video of others without knowledge or consent.

Feb 17, 2026

By late 2025, standard RAG systems (retrieval-augmented generation, where an AI pulls in external documents to answer questions) are failing at high rates, pushing companies toward agentic AI (autonomous systems that can plan and execute tasks independently). While agentic systems solve reliability problems, they create a critical security risk: they can autonomously execute malicious instructions, which threatens enterprise security.

CSO Online
02

Cohere launches a family of open multilingual models

industry
Feb 17, 2026

Cohere launched Tiny Aya, a family of open-weight (publicly available) multilingual AI models that support over 70 languages and can run on everyday devices like laptops without internet access. The models include regional variants optimized for different language groups, such as South Asian languages like Hindi and Bengali, and are available for developers to download and customize.

TechCrunch
03

Claims that AI can help fix climate dismissed as greenwashing

policyindustry
Feb 17, 2026

Tech companies are being accused of greenwashing (falsely claiming environmental benefits) by conflating traditional machine learning (a type of AI that learns patterns from data) with energy-intensive generative AI (systems that create new text, images, or video). A report analyzing 154 statements found that most claims about AI helping combat climate change refer to older, less resource-heavy machine learning methods rather than the modern chatbots and image generators that consume massive amounts of electricity in data centers.

The Guardian Technology
04

Was CISOs über OpenClaw wissen sollten

securitysafety
Feb 16, 2026

OpenClaw is a popular open-source tool that orchestrates AI agents (programs that can act independently across devices and trigger workflows) and can interact with online services and chat apps, but security researchers warn it poses serious risks because these agents can perform any action a user can perform while being controlled externally. Early versions were insecure by default, and over 42,000 exposed instances have been found online with critical authentication bypass vulnerabilities (flaws that let attackers skip login checks), creating risks including data theft, unauthorized access, and potential exposure of confidential business information.

CSO Online
05

Open source maintainers being targeted by AI agent as part of ‘reputation farming’

securitypolicy
Feb 16, 2026

AI agents are being used to submit large numbers of pull requests (code contributions) to open-source projects to build fake reputation quickly, a tactic called 'reputation farming.' This is concerning because it could eventually help attackers gain trust in important software projects and inject malicious code through supply chain attacks (attacks targeting the software that other programs depend on), something that normally takes years to accomplish but could now happen much faster.

CSO Online
06

Infostealer Steals OpenClaw AI Agent Configuration Files and Gateway Tokens

securityprivacy
Feb 16, 2026

Researchers discovered that an information stealer (malware that secretly copies sensitive files) infected a victim and stole OpenClaw AI agent configuration files, including gateway tokens (authentication credentials), cryptographic keys, and the agent's operational guidelines. This marks a shift in malware tactics from stealing browser passwords to targeting AI agents, and attackers could use stolen tokens to impersonate victims or access their local AI systems if ports are exposed.

Fix: OpenClaw maintainers announced a partnership with VirusTotal to scan for malicious skills (plugins) uploaded to ClawHub, establish a threat model, and add the ability to audit for potential misconfigurations.

The Hacker News
07

Infostealer malware found stealing OpenClaw secrets for first time

securityprivacy
Feb 16, 2026

Infostealer malware (malware designed to steal sensitive files and credentials) has been spotted for the first time stealing configuration files from OpenClaw, a local AI agent framework that manages tasks and accesses online services on a user's machine. The stolen files contain API keys, authentication tokens, and other secrets that could allow attackers to impersonate users and access their cloud services and personal data.

Fix: For nanobot (a similar AI assistant framework), the development team released fixes for a max-severity vulnerability tracked as CVE-2026-2577 in version 0.13.post7. No mitigation or update is mentioned in the source for OpenClaw itself.

BleepingComputer
08

AI chatbot firms face stricter regulation in online safety laws protecting children in the UK

policysafety
Feb 16, 2026

The UK government is closing a legal gap by bringing AI chatbots like ChatGPT, Gemini, and Copilot under its Online Safety Act, requiring them to remove illegal content or face fines and being blocked. This move follows criticism of X's Grok chatbot for spreading sexually explicit images, and reflects broader efforts to protect children from harmful online content through new regulations on age limits, infinite scrolling, and VPN access.

CNBC Technology
09

Rodney and Claude Code for Desktop

industry
Feb 16, 2026

Claude Code for Desktop is Anthropic's cloud-based AI coding tool that runs in a container environment (a isolated computing space), accessible through native iPhone and Mac apps. The desktop app lets users see images that Claude is analyzing through a Read /path/to/image tool, providing visual previews of what the AI is working on in real time. The iPhone app currently lacks this image display feature, though the user has requested it.

Simon Willison's Weblog
10

The Promptware Kill Chain

securityresearch
Feb 16, 2026

Attacks on AI language models have evolved beyond simple prompt injection (tricking an AI by hiding instructions in its input) into a more complex threat called "promptware," which follows a structured seven-step kill chain similar to traditional malware. The fundamental problem is that large language models (LLMs, AI systems trained on massive amounts of text) treat all input the same way, whether it's a trusted system command or untrusted data from a retrieved document, creating no architectural boundary between them.

Schneier on Security
Prev1...107108109110111...274Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026