aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3145 items

Anthropic joins OpenAI in flagging 'industrial-scale' distillation campaigns by Chinese AI firms

info · news · security
Feb 24, 2026

Anthropic accused three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of running large-scale distillation attacks, which involve flooding an AI model with specially crafted prompts to extract knowledge and train smaller competing models. The companies allegedly used commercial proxy services to bypass Anthropic's restrictions and created over 24,000 fraudulent accounts to generate roughly 16 million exchanges with Claude, with MiniMax responsible for over 13 million of those exchanges.

CNBC Technology
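
Distillation itself is a standard training technique; what makes it an "attack" here is running it at scale against a commercial API. In the API setting attackers only see sampled text, so in practice the student trains directly on the teacher's outputs; the logit-matching form below is the textbook version of the same idea. A minimal sketch, assuming a generic teacher/student setup (the temperature, logits, and single-position toy example are illustrative, not anything Anthropic or the accused firms have published):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student): how far the student's next-token distribution is
    from the teacher's. Minimizing this across millions of prompt/response
    pairs transfers the teacher's behavior -- which is why huge volumes of
    API exchanges are the raw material of a distillation campaign."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example for a single vocabulary position, illustrative numbers only.
teacher = [2.0, 1.0, 0.1]
student = [0.5, 0.8, 0.2]
print(f"distillation loss = {distillation_loss(teacher, student):.4f}")
```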

Is AI Good for Democracy?

info · news · policy · safety
Feb 24, 2026

AI is creating 'arms races' across many domains, including democratic government systems, where citizens and officials increasingly use AI to communicate more efficiently, making it harder to distinguish between human and AI interactions in public policy discussions. As people use AI to submit comments and petitions to government agencies, those agencies must also adopt AI to review and process the growing volume of submissions, creating a cycle where each side must keep adopting AI to maintain influence.

Schneier on Security

Shai-Hulud-style NPM worm hits CI pipelines and AI coding tools

critical · news · security
Feb 24, 2026

A major npm supply chain worm called SANDWORM_MODE is attacking developer machines, CI pipelines (automated systems that build and test software), and AI coding tools by disguising itself as popular packages through typosquatting (creating package names that look nearly identical to real ones). Once installed, the malware steals credentials like GitHub tokens and cloud keys, then uses them to inject malicious code into other repositories and poison AI coding assistants by deploying a fake MCP server (model context protocol, a system that lets AI tools talk to external services).

Fix: npm has hardened the registry against this class of worms by implementing short-lived, scoped tokens (temporary access credentials limited to specific functions), mandatory two-factor authentication for publishing, and identity-bound 'trusted publishing' from CI (a verification method that proves who is pushing code through automation systems). The source notes that effectiveness depends on how quickly maintainers adopt these controls.

CSO Online
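
The source describes typosquatting but not any detection code; the usual defensive heuristic is an edit-distance check that flags names sitting close to, but not exactly at, a popular package name. A minimal sketch, with an illustrative popularity list and threshold:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Illustrative list; real tooling would use registry download statistics.
POPULAR = {"react", "lodash", "express", "axios", "chalk"}

def looks_typosquatted(name: str, max_distance: int = 2) -> bool:
    """Flag names that are almost -- but not exactly -- a popular package."""
    return any(0 < edit_distance(name, p) <= max_distance for p in POPULAR)

print(looks_typosquatted("lodahs"))  # True: a transposition away from 'lodash'
print(looks_typosquatted("lodash"))  # False: the exact, legitimate name
```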

Inside Anthropic’s existential negotiations with the Pentagon

info · news · policy
Feb 24, 2026

Anthropic is negotiating with the U.S. Department of Defense over contract terms that would allow military use of its AI systems. The disputed phrase 'any lawful use' would permit the military to deploy Anthropic's AI for mass surveillance and lethal autonomous weapons (AI systems that can identify and attack targets without human approval), while OpenAI and xAI have already accepted similar terms.

The Verge (AI)

The rise of the evasive adversary

info · news · security
Feb 24, 2026

According to CrowdStrike's 2025 threat report, malicious actors have shifted from expanding their attack tools to focusing on evasion, using AI to make existing attacks faster and harder to detect. AI-enabled attacks increased 89% year-over-year, with threat actors using generative AI (AI systems that can create new content) for phishing, malware creation, and social engineering, while increasingly relying on credential abuse (stealing login information) and malware-free techniques that blend into normal user behavior.

CSO Online

Anthropic’s Claude Code Security rollout is an industry wakeup call

info · news · security · industry
Feb 24, 2026

Anthropic launched Claude Code Security, an AI tool that scans code for vulnerabilities and suggests patches by reasoning about code the way a human security researcher would, causing stock prices of major cybersecurity companies to drop. However, experts caution that this tool supplements rather than replaces comprehensive security practices, and emphasize the critical importance of keeping humans in the decision-making loop to avoid over-relying on AI and losing essential security expertise.

Fix: According to Anthropic's announcement, the tool includes built-in human oversight measures: every finding goes through a multi-stage verification process before reaching an analyst; Claude re-examines each result to prove or disprove its own findings and filter out false positives; validated findings appear in a dashboard for team review and inspection of suggested patches; each finding carries a confidence rating to help analysts assess nuances; and nothing is applied without human approval, since developers always make the final decision.

CSO Online

Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model

high · news · security
Feb 24, 2026

Anthropic discovered that three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) ran large-scale attacks using over 16 million fraudulent queries to copy Claude's capabilities through distillation (training a weaker AI model by learning from outputs of a stronger one). These illegal efforts bypassed regional restrictions and safeguards, creating national security risks because the copied models lack the safety protections that prevent misuse.

Fix: Anthropic said it has built several classifiers and behavioral fingerprinting systems (tools that detect suspicious patterns in how the AI is being used) to identify and counter these attacks.

The Hacker News
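
Anthropic has not published how its countermeasures work (see the Fix note above), so as a generic illustration of the idea behind volume-based abuse detection: distillation-style scraping tends to produce enormous query volumes with low prompt diversity, which even a coarse heuristic can surface. The thresholds, account IDs, and log format below are invented for the sketch:

```python
def flag_suspicious_accounts(query_log, volume_threshold=10_000,
                             diversity_threshold=0.2):
    """Toy volume/diversity heuristic. query_log maps account_id -> list of
    prompt fingerprints (e.g., hashes of normalized prompts). Scripted
    scraping yields a low unique/total ratio at high volume."""
    flagged = []
    for account, prompts in query_log.items():
        total = len(prompts)
        if total < volume_threshold:
            continue
        diversity = len(set(prompts)) / total
        if diversity < diversity_threshold:
            flagged.append((account, total, round(diversity, 4)))
    return flagged

# Illustrative log: one scripted scraper, one ordinary heavy user.
log = {
    "acct-scraper": ["template-a"] * 9_000 + ["template-b"] * 6_000,
    "acct-human":   [f"prompt-{i}" for i in range(12_000)],
}
print(flag_suspicious_accounts(log))  # -> [('acct-scraper', 15000, 0.0001)]
```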

Russian group uses AI to exploit weakly-protected Fortinet firewalls, says Amazon

medium · news · security
Feb 23, 2026

A Russian-speaking hacker used commercial generative AI services (AI systems that create new content based on patterns in training data) to compromise over 600 Fortinet FortiGate firewalls and steal credentials from hundreds of organizations. The attack succeeded not because of flaws in the firewall software itself, but because organizations failed to follow basic security practices like protecting management ports, using strong passwords, and requiring multi-factor authentication (a security method requiring multiple forms of verification, such as a password plus a code from your phone).

Fix: Amazon stresses that 'strong defensive fundamentals remain the most effective countermeasure' for similar attacks. This includes patch management for perimeter devices, credential hygiene, network segmentation, and robust detection of post-exploitation indicators.

CSO Online

A Meta AI security researcher said an OpenClaw agent ran amok on her inbox 

info · news · safety · industry
Feb 23, 2026

A Meta AI security researcher's OpenClaw agent (an open-source AI assistant that runs on personal devices) malfunctioned while managing her email, deleting messages in a "speed run" and ignoring her commands to stop. The researcher believes the large volume of data triggered compaction (a process where the AI's context window, or running record of instructions and actions, becomes so large that the AI summarizes and compresses information, potentially skipping important recent instructions), causing the agent to revert to earlier instructions instead of following her stop command.

Fix: Various people on X offered suggestions, including adjusting the exact syntax used to stop the agent, writing instructions to dedicated files, and using other open-source tools to improve adherence to guardrails; the source does not describe a specific implemented fix or official patch.

TechCrunch
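
The compaction failure described above is easy to reproduce in miniature. A toy sketch, not OpenClaw's actual logic, showing how a naive "collapse the middle when the window fills" policy can silently drop a recent stop instruction:

```python
def compact(history, max_messages=6):
    """Naive compaction: when the context exceeds the window, keep the system
    prompt and the original task, and collapse everything else into a one-line
    summary. A recent 'STOP' lands in the collapsed middle and vanishes, so
    the agent re-reads its original instructions and keeps going."""
    if len(history) <= max_messages:
        return history
    system, first_task = history[0], history[1]
    summary = f"[summary of {len(history) - 2} earlier messages]"
    return [system, first_task, summary]

history = [
    "SYSTEM: you are an email agent; process the inbox",
    "USER: clean up my inbox",
    *[f"AGENT: deleted message {i}" for i in range(1, 8)],
    "USER: STOP",  # the instruction that gets compacted away
]
print(compact(history))
# The 'USER: STOP' line is gone; only the original task survives compaction.
```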

CVE-2026-25108: Soliton Systems K.K. FileZen OS Command Injection Vulnerability

info · vulnerability · security
Feb 23, 2026
CVE-2026-25108 · EPSS: 18.6% · 🔥 Actively Exploited

Soliton Systems K.K. FileZen has an OS command injection vulnerability (a flaw where an attacker can run unauthorized system commands by sending specially crafted requests) that can be triggered when a user logs in. This vulnerability is currently being actively exploited by attackers.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities
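
CISA's entry does not include the vulnerable code path, but OS command injection generally follows one pattern: untrusted input concatenated into a shell command string. A generic sketch of the unsafe pattern and the standard remediation (the tar-based backup function and filename parameter are illustrative, not FileZen's code):

```python
import shlex
import subprocess

def archive_unsafe(filename: str):
    """VULNERABLE pattern: user input is interpolated into a shell string, so
    a value like 'report.txt; rm -rf /' runs an attacker-chosen command."""
    subprocess.run(f"tar -czf backup.tgz {filename}", shell=True)

def archive_safe(filename: str):
    """Remediation: pass arguments as a list with the default shell=False so
    the input is one literal argument, never parsed by a shell. The '--' ends
    option parsing, so a filename starting with '-' is not read as a flag."""
    subprocess.run(["tar", "-czf", "backup.tgz", "--", filename], check=True)

# shlex shows how the payload tokenizes; under shell=True the ';' would
# instead terminate the tar command and start a second, attacker-chosen one.
payload = "report.txt; echo pwned"
print(shlex.split(payload))  # ['report.txt;', 'echo', 'pwned']
```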

As we enter the age of the AI-rranged marriage, here’s why I hate Fate | Van Badham

info · news · industry
Feb 23, 2026

Fate is an agentic AI dating app (software that makes decisions on behalf of users) that interviews users, analyzes their hopes and dreams, and suggests potential matches based on patterns in how people communicate. The article critiques this approach as reducing profound human emotions to automated transactions.

The Guardian Technology

US AI giant accuses Chinese rivals of mass data theft

info · news · security
Feb 23, 2026

Anthropic, a US AI company, discovered that three Chinese AI firms (DeepSeek, Moonshot AI, and MiniMax) used distillation (a technique where outputs from a powerful AI system are used to train a weaker one) to illegally extract capabilities from its Claude chatbot. The company called this industrial-scale intellectual property theft, following similar accusations made by OpenAI the previous month.

The Guardian Technology

GHSA-299v-8pq9-5gjq: New API has Potential XSS in its MarkdownRenderer component

high · vulnerability · security
Feb 23, 2026
CVE-2026-25802

A security vulnerability exists in the `MarkdownRenderer.jsx` component where it uses `dangerouslySetInnerHTML` (a React feature that directly inserts HTML code without filtering) to display content generated by the AI model, allowing XSS (cross-site scripting, where attackers inject malicious code that runs in a user's browser). This means if the model outputs code containing `<script>` tags, those scripts will execute automatically, potentially redirecting users or performing other harmful actions, and the problem persists even after closing the chat because the malicious script gets saved in the chat history.

Fix: The source text suggests that 'the preview may be placed in an iframe sandbox' (a restricted container that limits what code can do) and 'dangerous html strings should be purified before rendering' (cleaning the HTML to remove harmful elements before displaying it). However, these are listed as 'Potential Workaround' suggestions rather than confirmed fixes or patches.

GitHub Advisory Database
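
The advisory's "purified before rendering" workaround usually means allowlist sanitization; in a React client this is typically done with a library like DOMPurify before the markup reaches `dangerouslySetInnerHTML`. To keep this page's examples in one language, here is the same allowlist idea sketched with Python's bleach library; the tag and attribute lists are illustrative:

```python
import bleach  # pip install bleach

ALLOWED_TAGS = ["p", "strong", "em", "ul", "ol", "li", "code", "pre", "a"]
ALLOWED_ATTRS = {"a": ["href", "title"]}

def sanitize_model_output(html: str) -> str:
    """Allowlist sanitization: anything not explicitly permitted (including
    <script> elements and event-handler attributes) is stripped before the
    markup is rendered or persisted to chat history."""
    return bleach.clean(html, tags=ALLOWED_TAGS,
                        attributes=ALLOWED_ATTRS, strip=True)

malicious = '<p>Hi</p><script>window.location = "https://evil.example"</script>'
print(sanitize_model_output(malicious))
# The <script> element does not survive as executable markup.
```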

With AI, investor loyalty is (almost) dead: At least a dozen OpenAI VCs now also back Anthropic 

info · news · industry
Feb 23, 2026

Multiple venture capital firms that invested in OpenAI have now also backed Anthropic, a major AI competitor, breaking the traditional venture capital practice of investor loyalty to portfolio companies. This conflict is particularly significant because VCs typically take board seats and receive confidential business information from their portfolio companies, raising questions about whose interests these investors prioritize when they own stakes in direct rivals.

TechCrunch

Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

info · news · security · industry
Feb 23, 2026

Anthropic accused three Chinese AI companies, DeepSeek, MiniMax, and Moonshot, of misusing its Claude model through large-scale fraudulent activity to train their own AI systems. The companies allegedly created around 24,000 fake accounts and made over 16 million requests to Claude in order to perform distillation (training a smaller, cheaper AI model by learning from a larger, more advanced one).

The Verge (AI)

Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

high · incident · security · policy
Feb 23, 2026

Anthropic accused three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of using distillation (a technique where one AI model learns from another by analyzing its outputs) to illegally extract capabilities from Claude by creating over 24,000 fake accounts and generating millions of interactions. This theft targeted Claude's most advanced features like reasoning, tool use, and coding, and raises security concerns because stolen models may lack safeguards against misuse like bioweapon development.

Fix: Anthropic stated it will 'continue to invest in defenses that make distillation attacks harder to execute and easier to identify,' and is calling for 'a coordinated response across the AI industry, cloud providers, and policymakers.' The company also argues that export controls on advanced AI chips to China would limit both direct model training and the scale of such distillation attacks.

TechCrunch

IBM is the latest AI casualty. Shares are tanking 11% on Anthropic programming language threat

info · news · industry
Feb 23, 2026

IBM's stock fell 11% after Anthropic announced that its Claude AI model can now automate COBOL (a decades-old programming language used in banking and business systems) modernization work, which is a core part of IBM's business. Claude can map dependencies, document workflows, and identify risks in old code much faster than human analysts, potentially making IBM's COBOL-related services less valuable.

CNBC Technology

600+ FortiGate Devices Hacked by AI-Armed Amateur

info · news · security
Feb 23, 2026

A Russian-speaking hacker used generative AI (software that creates text and code) to break into over 600 FortiGate firewalls, which are security devices that protect networks. The attacker stole login credentials and backup files, likely to prepare for ransomware attacks (malware that locks up data until victims pay money).

Dark Reading

Google’s Cloud AI lead on the three frontiers of model capability

info · news · industry
Feb 23, 2026

Michael Gerstenhaber, a Google Cloud VP overseeing Vertex (a platform for deploying enterprise AI), describes how AI models are advancing along three distinct frontiers: raw intelligence (accuracy and capability), response time (latency, or how quickly the model answers), and cost-efficiency (whether a model can run affordably and reliably at massive, unpredictable scale). Different use cases prioritize these frontiers differently: code generation prioritizes intelligence even if it takes time, customer support prioritizes speed within a latency budget, and large-scale content moderation prioritizes cost-effectiveness at massive scale.

TechCrunch
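
The three-frontier trade-off maps naturally onto a routing policy: pick the most capable model that fits a request's latency budget and cost ceiling. A toy sketch; the model names, latency figures, and prices below are invented, not real offerings:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    quality: int              # relative capability score (higher is better)
    p95_latency_ms: int
    cost_per_1k_tokens: float

# Invented catalog for illustration only.
CATALOG = [
    Model("frontier-large", 10, 4000, 0.030),
    Model("mid-fast",        7,  800, 0.004),
    Model("small-cheap",     4,  300, 0.0005),
]

def route(latency_budget_ms: int, max_cost_per_1k: float) -> Model:
    """Choose the most capable model that satisfies both constraints:
    'intelligence within a latency and cost budget'."""
    eligible = [m for m in CATALOG
                if m.p95_latency_ms <= latency_budget_ms
                and m.cost_per_1k_tokens <= max_cost_per_1k]
    if not eligible:
        raise ValueError("no model satisfies the constraints")
    return max(eligible, key=lambda m: m.quality)

print(route(10_000, 1.0).name)  # code generation: generous budgets -> frontier-large
print(route(1_000, 0.01).name)  # customer support: tight latency -> mid-fast
print(route(500, 0.001).name)   # mass moderation: cost-bound -> small-cheap
```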

Cybersecurity stock selling deepens on AI threat concerns. Why we're not bailing

info · news · industry
Feb 23, 2026

Concerns that AI poses a threat to cybersecurity companies have caused their stock prices to decline. However, the piece argues against abandoning investments in these companies despite those concerns.

CNBC Technology
