aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,700
[LAST_24H]
25
[LAST_7D]
169
Daily BriefingTuesday, March 31, 2026
>

FastGPT Authentication Bypass Enables Server-Side Proxying: FastGPT versions before 4.14.9.5 have a critical vulnerability (CVE-2026-34162) where an HTTP testing endpoint lacks authentication and acts as an open proxy, letting unauthenticated attackers make requests on behalf of the FastGPT server. A separate high-severity SSRF vulnerability (CVE-2026-34163) in the same platform's MCP tools endpoints allows authenticated attackers to trick the server into scanning internal networks and accessing cloud metadata services.

>

Command Injection Flaws Hit MLflow and OpenAI Codex: MLflow's model serving feature has a high-severity command injection vulnerability (CVE-2026-0596) where attackers can insert shell commands through unsanitized model paths when `enable_mlserver=True`. Separately, researchers found a critical vulnerability in OpenAI Codex that could have allowed attackers to steal GitHub tokens (secret credentials for accessing repositories), which OpenAI has since patched.

Latest Intel

page 85/270
VIEW ALL
01

Anthropic won’t budge as Pentagon escalates AI dispute

policyindustry
Critical This Week5 issues
critical

CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/

CVE-2026-34162NVD/CVE DatabaseMar 31, 2026
Mar 31, 2026
>

Prompt Injection Bypasses Safety Controls in Multiple AI Tools: Multiple AI systems are vulnerable to prompt injection attacks (where attackers hide malicious instructions in input to trick the AI): the 1millionbot Millie chatbot (CVE-2026-4399) can be tricked using Boolean logic to bypass restrictions, Sixth's AI terminal tool (CVE-2026-30310) can be fooled into running dangerous commands without user approval, and CrewAI framework vulnerabilities allow attackers to chain exploits and escape sandboxes (restricted environments meant to contain AI actions).

>

Google Cloud Vertex AI Service Agents Had Excessive Default Permissions: Researchers found that AI agents running on Google Cloud's Vertex AI platform could be weaponized as "double agents" because the default service agent accounts (special accounts that run AI services) had excessive permissions, allowing attackers to steal credentials, access private code repositories, and reach internal infrastructure. Google responded by updating their documentation to better explain how Vertex AI uses resources and accounts.

Feb 24, 2026

Anthropic, an AI company, is refusing to give the U.S. military unrestricted access to its AI model because of concerns about mass surveillance and autonomous weapons, despite the Pentagon threatening to declare the company a "supply chain risk" (a serious designation usually reserved for foreign adversaries) or invoke the Defense Production Act (a law giving the president power to force companies to prioritize production for national defense). The dispute highlights tension between corporate AI safety policies and government demands for military access, with experts warning that using these extreme measures could signal the U.S. is becoming unstable for business.

TechCrunch
02

Anthropic faces Friday deadline in Defense AI clash with Hegseth

policy
Feb 24, 2026

Defense Secretary Pete Hegseth has given Anthropic (an AI company that develops Claude models) until Friday to allow the military broad access to its AI systems, threatening to label the company a 'supply chain risk' (a designation that would require DoD vendors to stop using Anthropic's products) or invoke the Defense Production Act (a law allowing the president to control domestic industries for national security) if it refuses. Anthropic wants safeguards preventing its models from being used for autonomous weapons or mass surveillance, while the DoD wants unrestricted access to 'all lawful use cases' without limitations.

CNBC Technology
03

Why AMD's megadeal with Meta shows Nvidia is still the best game in town

industry
Feb 24, 2026

N/A -- This content is a footer/navigation page from CNBC with no substantive article text about AMD, Meta, Nvidia, or any AI/LLM-related technical issue. The provided material contains only website metadata, subscription prompts, and legal information.

CNBC Technology
04

Cursor announces major update to AI agents as coding tool battle heats up

industry
Feb 24, 2026

Cursor, an AI coding tool startup, announced updates to its AI agents (software that can complete tasks automatically on a user's behalf) that allow them to test changes, run multiple tasks in parallel on cloud-based virtual machines (remote computers), and work across different platforms like Slack and GitHub. The update aims to help Cursor compete with rivals like OpenAI and Anthropic in the rapidly growing market for AI-powered coding assistants.

CNBC Technology
05

RoguePilot Flaw in GitHub Codespaces Enabled Copilot to Leak GITHUB_TOKEN

securitysafety
Feb 24, 2026

A vulnerability called RoguePilot in GitHub Codespaces allowed attackers to inject hidden malicious instructions into GitHub issues, which GitHub Copilot (an AI code assistant) would automatically execute when a developer opened a Codespace from that issue, potentially leaking the GITHUB_TOKEN (a credential that grants access to repositories). The flaw is an example of prompt injection (tricking an AI by hiding instructions in its input), and attackers could hide their malicious prompts using HTML comments to avoid detection.

Fix: The vulnerability has since been patched by Microsoft following responsible disclosure.

The Hacker News
06

OpenAI COO says ‘we have not yet really seen AI penetrate enterprise business processes’

industry
Feb 24, 2026

OpenAI's COO Brad Lightcap stated that AI has not yet been widely adopted into enterprise business processes at scale, despite powerful AI systems being available to individual users. To address this, OpenAI launched a new platform called OpenAI Frontier, which allows enterprises to build and manage agents (AI systems that can perform tasks autonomously) and helps complex organizations integrate AI into their workflows by measuring success through business outcomes rather than just user seat licenses.

TechCrunch
07

Microsoft adds Copilot data controls to all storage locations

securityprivacy
Feb 24, 2026

Microsoft is expanding data loss prevention (DLP, rules that block AI from accessing sensitive documents) controls to protect files stored on local devices, not just in cloud storage like SharePoint or OneDrive. The change, rolling out between March and April 2026, will prevent the Microsoft 365 Copilot AI assistant from reading or processing documents marked as confidential. This update addresses a recent bug where Copilot Chat accidentally read confidential emails despite DLP protections being active.

Fix: Microsoft will deploy the enhancement through the Augmentation Loop (AugLoop, an Office component that helps Copilot access documents) between late March and late April 2026. The fix enables Office clients to provide sensitivity labels directly to AugLoop rather than requiring a call to Microsoft Graph using file URLs, allowing DLP enforcement to apply uniformly across all storage locations, including local files. Organizations with DLP policies already configured to block Copilot from processing sensitivity-labeled content will have this protection automatically enabled without requiring administrative action or changes.

BleepingComputer
08

Software stocks rebound as Anthropic announces new partnerships

industry
Feb 24, 2026

Anthropic announced new partnerships and updates to Claude (its AI assistant), allowing companies to integrate it into enterprise software tools like Slack, Gmail, and Salesforce. This announcement reassured investors that AI won't completely replace existing software systems, causing software and cybersecurity stocks to rebound after recent declines driven by fears that AI tools could disrupt traditional software businesses.

CNBC Technology
09

Anthropic’s Claude Cowork is plugging AI into more boring enterprise stuff

industry
Feb 24, 2026

Anthropic announced updates to Claude Cowork, an AI tool that helps with office tasks, allowing it to connect with popular apps like Google Workspace, Docusign, and WordPress through new plug-ins. These plug-ins can automate work across different fields such as HR, design, and finance, and Claude can now handle multi-step tasks across Excel and PowerPoint by passing context between the two applications.

The Verge (AI)
10

Oura launches a proprietary AI model focused on women’s health

industry
Feb 24, 2026

Oura, a health tracking company, released a custom AI model designed specifically for women's health questions, powering its chatbot called Oura Advisor. The model uses established medical research reviewed by doctors and combines it with users' biometric data (measurements like heart rate and sleep patterns) to provide personalized guidance on topics like menstrual cycles and menopause. The company emphasizes the model is hosted on its own servers and designed to be supportive rather than replace actual medical doctors.

TechCrunch
Prev1...8384858687...270Next
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026