aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,718
Last 24 hours: 40
Last 7 days: 176
Daily Briefing: Tuesday, March 31, 2026

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation, with backing from SoftBank, Amazon, and Nvidia. The company now serves 900 million weekly users and generates $2 billion in monthly revenue as it prepares for a potential IPO, despite not yet being profitable.


Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.
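The SSRF class behind CVE-2026-34163 comes down to a server fetching attacker-supplied URLs without checking where they resolve. As a hedged illustration (not FastGPT's actual patch), a minimal guard in Python can resolve the host and reject private, loopback, link-local, and reserved addresses, which covers the cloud metadata endpoint at 169.254.169.254:

```python
# Generic SSRF address check: a sketch, not the FastGPT fix.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs whose host is (or resolves to) a private, loopback,
    link-local, or reserved address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    host = parsed.hostname
    try:
        # Literal IP (covers e.g. the 169.254.169.254 metadata endpoint).
        addrs = [ipaddress.ip_address(host)]
    except ValueError:
        try:
            # Hostname: resolve via DNS and check every returned address.
            infos = socket.getaddrinfo(host, None)
            addrs = [ipaddress.ip_address(i[4][0].split("%")[0]) for i in infos]
        except socket.gaierror:
            return False
    return all(
        not (a.is_private or a.is_loopback or a.is_link_local or a.is_reserved)
        for a in addrs
    )
```

A robust implementation would also re-validate on HTTP redirects and guard against DNS rebinding between check and fetch; this sketch shows only the core address check.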


Latest Intel

01

OpenAI debated calling police about suspected Canadian shooter’s chats

safety, policy
Critical This Week: 5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026

Claude SDK Filesystem Sandbox Escapes: Both TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of Claude SDK had vulnerabilities in their filesystem memory tools where attackers could use prompt injection or symlinks to access files outside intended sandbox directories, potentially reading or modifying sensitive data they shouldn't access.
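The symlink variant of these SDK bugs is a classic path-escape pattern. A hedged sketch of the generic defense (not Anthropic's actual fix): resolve the requested path, symlinks and all, and only then check that it is still inside the sandbox root:

```python
# Sandbox containment check: illustrative, not the Claude SDK patch.
from pathlib import Path

def resolve_in_sandbox(sandbox_root: str, user_path: str) -> Path:
    """Resolve user_path (following symlinks and '..' components) and
    ensure the *resolved* path stays under sandbox_root.
    Raises PermissionError on escape; absolute user paths that leave
    the root are rejected the same way."""
    root = Path(sandbox_root).resolve()
    candidate = (root / user_path).resolve()
    if not candidate.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return candidate
```

String-level checks on the unresolved path (for example, rejecting "..") miss symlinks entirely, which is presumably why both SDKs needed fixes at the resolution level.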


Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems). The library is downloaded 100 million times per week and used in 80% of cloud environments; the malicious versions were detected and removed within hours.
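One practical mitigation for this attack class is verifying fetched artifacts against the integrity hashes pinned in a lockfile, so a swapped tarball is caught even if the registry serves malicious bytes. A minimal sketch of checking npm's SRI format ("sha512-<base64>") in Python, as an illustration rather than a drop-in tool:

```python
# Subresource Integrity (SRI) check against a lockfile-pinned digest.
import base64
import hashlib

def verify_sri(data: bytes, integrity: str) -> bool:
    """Check raw tarball bytes against an SRI string like 'sha512-...'.
    npm lockfiles store standard base64 of the raw digest."""
    algo, _, expected = integrity.partition("-")
    digest = hashlib.new(algo, data).digest()
    return base64.b64encode(digest).decode() == expected
```

Integrity pinning catches tampering after the pin is recorded; it cannot help if the malicious version itself gets pinned, which is why the hours-long detection window in this incident still mattered.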


Claude AI Discovers RCE Bugs in Vim and Emacs: Claude AI helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in Vim and GNU Emacs text editors that trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

Feb 21, 2026

OpenAI's monitoring tools flagged an 18-year-old user's chats on ChatGPT (a large language model chatbot) that described gun violence, leading to the account being banned in June 2025. The company debated whether to alert Canadian police but decided the chats didn't meet reporting criteria, though OpenAI later contacted authorities after the user allegedly killed eight people in a mass shooting in Canada.

TechCrunch
02

Suspect in Tumbler Ridge school shooting described violent scenarios to ChatGPT

safety
Feb 21, 2026

A suspect in a mass shooting in Tumbler Ridge, British Columbia had conversations with ChatGPT describing gun violence, which triggered the chatbot's automated content review system (a safety filter that flags harmful content). OpenAI employees raised concerns that these posts could indicate a real-world threat and suggested contacting authorities, but company leaders decided the posts did not pose a credible and immediate danger and did not contact law enforcement.

The Verge (AI)
03

Amazon: AI-assisted hacker breached 600 FortiGate firewalls in 5 weeks

security
Feb 21, 2026

A Russian-speaking hacker used generative AI services to breach over 600 FortiGate firewalls (network security devices) across 55 countries between January and February 2026. Rather than exploiting software flaws, the attacker scanned the internet for exposed firewall management interfaces, used brute-force attacks (trying many password combinations) with common passwords to gain access, then deployed AI-generated tools to automate reconnaissance and extract credentials from the breached networks. The attacker also targeted backup systems before attempting to deploy ransomware (malware that encrypts files and demands payment).

BleepingComputer
04

CVE-2026-27487: OpenClaw is a personal AI assistant. In versions 2026.2.13 and below, when using macOS, the Claude CLI keychain credenti

security
Feb 21, 2026

OpenClaw, a personal AI assistant, had a security flaw in versions 2026.2.13 and below on macOS where OAuth tokens (authentication credentials that prove you're logged in) could be used to inject malicious OS commands (commands that run at the operating system level) into the credential refresh process. An attacker could exploit this by crafting a specially designed token to execute arbitrary commands on the affected system.

Fix: Update to version 2026.2.14 or later. According to the source, 'This issue has been fixed in version 2026.2.14.'
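The underlying bug class here is OS command injection: interpolating an attacker-influenced value into a shell command string. A hedged Python illustration of the safe pattern (the "security" invocation and service name below are placeholders, not OpenClaw's actual code):

```python
# Command-injection-safe credential refresh: an illustration only.
import re
import subprocess

# Conservative allowlist of characters plausible in an OAuth token.
TOKEN_RE = re.compile(r"^[A-Za-z0-9._~+/=-]+$")

def validate_token(token: str) -> bool:
    return TOKEN_RE.fullmatch(token) is not None

def refresh_credential(token: str) -> None:
    if not validate_token(token):
        raise ValueError("malformed token")
    # VULNERABLE pattern (string interpolation into a shell):
    #   subprocess.run(f"security add-generic-password -w {token}", shell=True)
    # Safe pattern: argv list, no shell; the token is one opaque argument.
    subprocess.run(
        ["security", "add-generic-password", "-s", "example-service",
         "-w", token],
        check=True,
    )
```

Passing arguments as a list removes the shell from the path entirely, so metacharacters in a crafted token have nothing to inject into; the format check is defense in depth.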

NVD/CVE Database
05

Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning

security, industry
Feb 21, 2026

Anthropic has launched Claude Code Security, a new AI feature that scans software codebases for vulnerabilities and suggests patches for human review. The tool uses AI reasoning to detect security issues that traditional scanning methods might miss, assigns severity ratings to findings, and requires human approval before any changes are made.

The Hacker News
06

Tumbler Ridge suspect's ChatGPT account banned before shooting

safety, policy
Feb 21, 2026

OpenAI banned a ChatGPT account belonging to a mass shooting suspect in June 2025, but did not alert authorities because the account activity did not meet the company's threshold for reporting (a credible or imminent plan for serious harm). The suspect later carried out an attack in Tumbler Ridge, British Columbia in February 2026 that killed eight people, leading OpenAI to contact police after the fact and announce it would review its reporting criteria with experts.

Fix: OpenAI stated it 'is constantly reviewing its referral criteria with experts and that it is reviewing the case for improvements.' The company also noted it trains ChatGPT to 'discourage imminent real-world harm when it identifies a dangerous situation and to refuse to help people that are attempting to use the service for illegal activities.' However, OpenAI reaffirmed its policy of 'alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.'

BBC Technology
07

Why fake AI videos of UK urban decline are taking over social media

safety, policy
Feb 21, 2026

AI-generated fake videos showing absurd scenes of urban decline in Croydon, London are going viral on social media, with millions of views across TikTok and Instagram Reels. These deepfakes (AI-created videos that look real but are fabricated) are part of a trend called "decline porn" that portrays Western cities as overrun with immigrants and crime, often fueling racist comments and anger among viewers who believe them. The creator, known as RadialB, intentionally makes the videos look realistic to grab attention and doesn't take responsibility for how they spread divisive political narratives, despite adding small labels noting they are AI-generated.

BBC Technology
08

EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

policy, industry
Feb 20, 2026

EC-Council launched four new AI certifications and an updated executive program to address a major gap: AI technology is being adopted much faster than the workforce is being trained to secure and manage it. The credentials (covering AI essentials, program management, offensive security testing, and responsible governance) are built around a framework called "Adopt. Defend. Govern." that helps organizations deploy, secure, and oversee AI systems responsibly as they move from experimental projects to critical infrastructure.

The Hacker News
09

OpenAI considered alerting Canadian police about school shooting suspect months ago

safety, policy
Feb 20, 2026

OpenAI detected a user account (Jesse Van Rootselaar) engaged in behavior suggesting violent activities through its abuse detection system, but decided the account activity did not meet the threshold for reporting to law enforcement because there was no imminent and credible risk of serious physical harm. Months later, the same person committed a school shooting in British Columbia that killed eight people, after which OpenAI retroactively contacted the Royal Canadian Mounted Police with information about the account and its usage.

The Guardian Technology
10

Compromised npm package silently installs OpenClaw on developer machines

security
Feb 20, 2026

A compromised npm publish token (a credential that allows someone to upload code to a package repository) was used to push a malicious update to the Cline CLI (command-line tool), which secretly installed OpenClaw, an AI agent with broad system access, on developers' machines without their knowledge. The malicious package sat on the registry for eight hours before being removed, and OpenClaw itself has a history of security vulnerabilities including prompt injection attacks (tricking an AI by hiding instructions in its input) and authentication bypasses.

Fix: For developers who installed or updated Cline CLI during the compromised window on February 17, Socket advises: (1) Update to the latest version by running 'npm install -g cline@latest'; (2) If on version 2.3.0, update to 2.4.0 or higher; (3) Check for and immediately remove OpenClaw if it wasn't intentionally installed.
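Step (3) of that advice can also be done programmatically. A small Python sketch (an illustration, not Socket's tooling) that scans a project's package-lock.json for an unexpected dependency such as openclaw:

```python
# Scan an npm lockfile for a named dependency, including nested copies.
import json
from pathlib import Path

def find_dependency(lockfile: str, name: str) -> list[str]:
    """Return lockfile 'packages' entries that install <name>,
    e.g. 'node_modules/openclaw' or any nested '.../node_modules/openclaw'."""
    lock = json.loads(Path(lockfile).read_text())
    return [
        path for path in lock.get("packages", {})
        if path == f"node_modules/{name}" or path.endswith(f"/{name}")
    ]
```

Running this across repositories after an incident window is faster and less error-prone than eyeballing node_modules directories by hand.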

CSO Online
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026