aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,757
[LAST_24H]
23
[LAST_7D]
175
Daily BriefingThursday, April 2, 2026
>

Model Context Protocol Security Gaps Highlighted: MCP (a system that connects AI agents to data sources) has gained business adoption but faces serious risks including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements like OAuth support and an official registry, organizations still lack adequate tools for access controls, authorization checks, and detailed logging to protect sensitive data.

Latest Intel

page 135/276
VIEW ALL
01

Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks

securityresearch
Critical This Week5 issues
critical

GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code

CVE-2026-34938GitHub Advisory DatabaseApr 1, 2026
Apr 1, 2026
Dec 22, 2025

Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three types of GIA methods and finds that while optimization-based GIA is most practical, generation-based and analytics-based GIA have significant limitations, and proposes a three-stage defense pipeline for FL frameworks.

Fix: The source mentions 'a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection,' but does not explicitly describe what this pipeline contains or how to implement it.

IEEE Xplore (Security & AI Journals)
02

CVE-2025-68478: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.7.0, if an arbitrary p

security
Dec 19, 2025

Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where an attacker can specify any file path in a request to create or overwrite files anywhere on the server. The vulnerability exists because the server doesn't restrict or validate the file paths, allowing attackers to write files to sensitive locations like system directories.

Fix: Update Langflow to version 1.7.0, which fixes the issue.

NVD/CVE Database
03

CVE-2025-68477: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.7.0, Langflow provides

security
Dec 19, 2025

Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where its API Request component can make arbitrary HTTP requests to internal network addresses. An attacker with an API key could exploit this SSRF (server-side request forgery, where a server is tricked into making requests to unintended targets) to access sensitive internal resources like databases and metadata services, potentially stealing information or preparing further attacks.

Fix: Update to version 1.7.0 or later, which contains a patch for this issue.

NVD/CVE Database
04

Can chatbots craft correct code?

safetyresearch
Dec 19, 2025

The article argues that while AI language models (LLMs, systems trained on large amounts of text to generate responses) and traditional programming languages both increase abstraction, they differ fundamentally in a critical way: compilers are deterministic (they reliably produce the same output every time), while LLMs are nondeterministic (they produce different outputs for the same input). This matters for software security and correctness because compilers preserve the programmer's intended meaning through the translation process, but LLMs cannot guarantee they will generate code that does what you actually need.

Trail of Bits Blog
05

Evolving AI Transparency: The Journey of the AIBOM Generator and Its New Home at OWASP

securitypolicy
Dec 18, 2025

The AIBOM Generator, an open-source tool that creates an AI Software Bill of Materials (AIBOM, a structured document listing key information about an AI model like its data sources and configurations), has been moved to OWASP (a nonprofit focused on software security) to enable broader community collaboration and development. The tool helps organizations understand what's inside AI models, where they came from, and how trustworthy their documentation is, addressing a gap between rapid AI adoption and lagging transparency practices. The project is now part of the OWASP GenAI Security Project and will continue improving AI supply chain visibility through community-driven enhancements.

OWASP GenAI Security
06

CVE-2025-63389: A critical authentication bypass vulnerability exists in Ollama platform's API endpoints in versions prior to and includ

security
Dec 18, 2025

CVE-2025-63389 is a critical vulnerability in Ollama (an AI platform) versions up to v0.12.3 where API endpoints (connection points for software communication) are exposed without authentication (verification of identity), allowing attackers to remotely perform unauthorized model management operations. The vulnerability stems from missing authentication checks on critical functions.

NVD/CVE Database
07

CVE-2025-62998: Insertion of Sensitive Information Into Sent Data vulnerability in WP Messiah WP AI CoPilot allows Retrieve Embedded Sen

security
Dec 18, 2025

CVE-2025-62998 is a vulnerability in WP AI CoPilot (a WordPress plugin that adds AI features) versions 1.2.7 and earlier, where sensitive information can be unintentionally included in data sent from the plugin. This is classified as CWE-201 (insertion of sensitive information into sent data), meaning the plugin may leak private or confidential data to unintended recipients.

NVD/CVE Database
08

CVE-2025-63390: An authentication bypass vulnerability exists in AnythingLLM v1.8.5 in via the /api/workspaces endpoint. The endpoint fa

security
Dec 18, 2025

AnythingLLM v1.8.5 has a vulnerability in its /api/workspaces endpoint (a web address used to access workspace data) that skips authentication checks, allowing attackers without permission to see detailed information about all workspaces, including AI model settings, system prompts (instructions given to the AI), and other configuration details. This means someone could potentially discover sensitive workspace configurations without needing to log in.

NVD/CVE Database
09

AI Safety Newsletter #67: Trump’s preemption executive order

policy
Dec 17, 2025

President Trump issued an executive order to prevent states from regulating AI by using federal tools like funding withholding and legal challenges, aiming to replace varied state rules with a single federal framework. The order directs federal agencies, including the Attorney General and Commerce Secretary, to challenge state AI laws they view as problematic, while the FTC and FCC will issue guidance on how existing federal laws apply to AI. This action follows a year where ambitious state AI safety proposals, like New York's RAISE Act (which would require AI labs to publish safety practices and report serious incidents), were either weakened or blocked.

CAIS AI Safety Newsletter
10

Model Steganography During Model Compression

securityresearch
Dec 17, 2025

Researchers have developed a steganographic method (hiding secret data inside another medium) that embeds hidden messages into compressed neural network models (AI systems made smaller through techniques like quantization, pruning, or distillation). The approach allows a receiver with the correct extraction network to recover the hidden data while ordinary users remain unaware it exists, and the method maintains the model's performance in size, speed, and accuracy.

IEEE Xplore (Security & AI Journals)
Prev1...133134135136137...276Next
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026