aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,718
[LAST_24H]
40
[LAST_7D]
176
Daily BriefingTuesday, March 31, 2026
>

OpenAI Closes Record $122 Billion Funding Round: OpenAI raised $122 billion at an $852 billion valuation with backing from SoftBank, Amazon, and Nvidia, now serving 900 million weekly users and generating $2 billion monthly revenue as it prepares for a potential IPO despite not yet being profitable.

>

Multiple Critical FastGPT Vulnerabilities Disclosed: FastGPT versions before 4.14.9.5 contain three high-severity flaws including CVE-2026-34162 (unauthenticated proxy endpoint allowing unauthorized server-side requests), CVE-2026-34163 (SSRF vulnerability letting attackers scan internal networks and access cloud metadata), and issues with MCP tools endpoints that accept user URLs without validation.

>

Latest Intel

page 91/272
VIEW ALL
01

Cybersecurity stocks drop for a second day as new Anthropic tool fuels AI disruption fears

industry
Feb 23, 2026

Cybersecurity stock prices fell sharply after Anthropic announced a new AI tool for its Claude model that can scan software code for vulnerabilities and suggest fixes, causing investors to worry that AI might replace traditional cybersecurity services. However, some analysts argue the threat is limited, noting that while AI could improve efficiency in specific tasks like code scanning, it cannot yet replace full end-to-end security platforms (complete systems that handle all stages of protecting against attacks).

Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026

Claude SDK Filesystem Sandbox Escapes: Both TypeScript (CVE-2026-34451) and Python (CVE-2026-34452) versions of Claude SDK had vulnerabilities in their filesystem memory tools where attackers could use prompt injection or symlinks to access files outside intended sandbox directories, potentially reading or modifying sensitive data they shouldn't access.

>

Axios npm Supply Chain Attack Impacts Millions: Attackers compromised the npm account of Axios' lead maintainer and published malicious versions containing a remote access trojan (malware that gives attackers control over infected systems), affecting a library downloaded 100 million times per week and used in 80% of cloud environments before being detected and removed within hours.

>

Claude AI Discovers RCE Bugs in Vim and Emacs: Claude AI helped identify remote code execution vulnerabilities (where attackers can run commands on systems they don't own) in Vim and GNU Emacs text editors that trigger simply by opening a malicious file, exploiting modeline handling in Vim and automatic Git operations in Emacs.

CNBC Technology
02

Does Big Tech actually care about fighting AI slop?

safetypolicy
Feb 23, 2026

Instagram's leader Adam Mosseri warned that AI can now convincingly fake almost any content, making it hard for creators to stand out with authentic material. He proposed solving this by having camera manufacturers cryptographically sign images (using math-based codes that prove an image wasn't altered) at the moment they're captured, creating a verifiable record of what's real versus AI-generated.

Fix: Camera manufacturers will cryptographically sign images at capture, creating a chain of custody to establish a trustworthy system for determining what's not AI.

The Verge (AI)
03

Anthropic CEO Dario Amodei to meet with Defense Secretary Pete Hegseth on AI DoD model use

policy
Feb 23, 2026

Anthropic's CEO is meeting with the U.S. Defense Secretary to resolve disagreements over how the military can use the company's AI models (large language models trained to understand and generate text). Anthropic wants guarantees its technology won't be used for autonomous weapons (systems that make decisions without human control) or domestic surveillance, while the Department of Defense wants permission to use the models for any lawful purpose without restrictions.

CNBC Technology
04

How AI agents could destroy the economy

policyindustry
Feb 23, 2026

Citrini Research published a scenario describing how AI agents (autonomous AI systems that can make decisions and take actions independently) could trigger economic collapse by replacing white-collar workers with cheaper AI alternatives, creating a negative feedback loop where job losses reduce consumer spending, forcing companies to invest even more in AI to survive. The scenario imagines unemployment doubling and stock market value falling by a third within two years, though the researchers present it as a thought experiment rather than a prediction.

TechCrunch
05

Defense Secretary summons Anthropic’s Amodei over military use of Claude

policy
Feb 23, 2026

The U.S. Defense Secretary is meeting with Anthropic's CEO to pressure the company into allowing military use of Claude (Anthropic's AI system) for mass surveillance and autonomous weapons (weapons that can fire without human approval). Anthropic has refused these uses, and the Pentagon is threatening to label it a "supply chain risk" (a designation that would ban it from government contracts), which could void their $200 million military contract and force other Pentagon partners to stop using Claude.

TechCrunch
06

OpenAI lands multiyear deals with consulting giants in enterprise push

industry
Feb 23, 2026

OpenAI announced partnerships with four major consulting firms (Accenture, Boston Consulting Group, Capgemini, and McKinsey) to help deploy its enterprise AI platform called Frontier, which acts as an intelligence layer that connects different systems and data within organizations to help companies manage and build AI agents (tools that can independently complete tasks). These consulting partnerships aim to accelerate AI adoption for enterprise customers by combining OpenAI's technology with the consulting firms' existing relationships and deep knowledge of how businesses operate.

CNBC Technology
07

Tariffs, flight cancellations, OpenAI's spending reset and more in Morning Squawk

industry
Feb 23, 2026

This newsletter covers multiple business and policy topics, including the Supreme Court striking down Trump's tariffs (duties, or taxes on imported goods) in a 6-3 decision, followed by Trump announcing a new 15% global tariff the next day. A major winter blizzard caused airlines to cancel 15% of U.S. flights on Monday, and Trump called on Netflix to fire board member Susan Rice.

CNBC Technology
08

Secure and Efficient Model Training Framework for Multiuser Semantic Communications via Over-the-Air Mixup

researchsecurity
Feb 23, 2026

This paper presents SIMix, a training framework for systems where multiple users learn AI models together over wireless networks while protecting their private data. The system uses Over-the-Air Mixup (OAM, a technique that combines data from multiple users through wireless transmission to hide sensitive information) and groups users strategically to reduce communication needs by up to 25% while defending against model inversion attacks (attempts to reconstruct private training data from a trained model) and label inference attacks (guessing what category a user's data belongs to).

Fix: The paper proposes integrating Over-the-Air Mixup with label-aware user grouping, including a closed-form Tx-Rx scaling optimization that minimizes mean square error under channel noise, and an extended max-clique algorithm that dynamically partitions users into groups with minimal intra-label similarity to reduce model inversion attack success rates.

IEEE Xplore (Security & AI Journals)
09

PPOM-Attack: A Substitute Model-Free Perturbation Prediction and Optimization Method for Black-Box Adversarial Attack Against Face Recognition

securityresearch
Feb 23, 2026

Researchers developed PPOM-Attack, a method to fool face recognition (FR) systems by generating adversarial images (slightly altered photos that trick AI into misidentifying someone). Unlike earlier attacks that used substitute models (simpler AI systems trained to mimic the target system), PPOM-Attack directly queries the real face recognition system to learn how to create effective perturbations (tiny pixel changes), achieving 21.7% higher success rates while keeping the altered images looking natural.

IEEE Xplore (Security & AI Journals)
10

PromptFuzz: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs

securityresearch
Feb 23, 2026

Prompt injection attacks (tricking an AI by hiding malicious instructions in its input) pose a serious security risk to Large Language Models, as attackers can overwrite a model's original instructions to manipulate its responses. Researchers developed PromptFuzz, a testing framework that uses fuzzing techniques (automatically generating many variations of input data to find weaknesses) to systematically evaluate how well LLMs resist these attacks. Testing showed that PromptFuzz was highly effective at finding vulnerabilities, ranking in the top 0.14% of attackers in a real competition and successfully exploiting 92% of popular LLM-integrated applications tested.

IEEE Xplore (Security & AI Journals)
Prev1...8990919293...272Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026