aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24 hours: 5 · Last 7 days: 161
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.


Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.


Latest Intel

01

Judge sides with Anthropic to temporarily block the Pentagon’s ban

policy
Mar 26, 2026

Anthropic won a court order that temporarily blocks the Pentagon from barring the company from government contracts. The judge ruled that the Pentagon unfairly blacklisted Anthropic for publicly criticizing the government's contracting decisions, retaliation that violates the company's First Amendment free-speech rights (the constitutional protection of the right to speak publicly).

Critical This Week (5 issues)

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis (NVD/CVE Database, Mar 27, 2026)

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.


TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

The Verge (AI)
02

CVE-2026-27893: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to versio

security
Mar 26, 2026

vLLM (a tool that runs and serves large language models) has a vulnerability in versions 0.10.1 through 0.17.x where two model files ignore a user's security setting that disables remote code execution (the ability to run code from outside sources). This means attackers could run malicious code through model repositories even when the user explicitly turned off that capability.

Fix: Upgrade to version 0.18.0, which patches the issue.

NVD/CVE Database
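As a quick operational guard against the range above, a deployment script can refuse to start if the installed vLLM version falls in the affected window. The sketch below is illustrative: it assumes a plain `x.y.z` version string (pre-release suffixes like `rc1` would need extra parsing) and encodes the advisory's range of 0.10.1 up to but not including 0.18.0.

```python
# Minimal sketch: flag a vLLM version string that falls in the range
# affected by CVE-2026-27893 (0.10.1 <= version < 0.18.0, per the advisory).
# Assumes a plain "x.y.z" string; pre-release suffixes need extra handling.
def in_affected_range(ver: str) -> bool:
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return (0, 10, 1) <= parts < (0, 18, 0)

print(in_affected_range("0.17.5"))  # in the affected range
print(in_affected_range("0.18.0"))  # patched release
```

In practice the installed version could be read with `importlib.metadata.version("vllm")` and fed to this check at startup.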
03

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

security
Mar 26, 2026

F5 BIG-IP APM (a network access management tool) contains an unspecified vulnerability that allows attackers to achieve remote code execution (the ability to run commands on a system they don't own). This vulnerability is actively being exploited by real attackers in the wild, making it an urgent security concern.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Check for signs of compromise on all internet-accessible F5 products affected by this vulnerability. Consult F5's official guidelines and the referenced knowledge base articles at https://my.f5.com/manage/s/article/K000156741, https://my.f5.com/manage/s/article/K000160486, and https://my.f5.com/manage/s/article/K11438344 to assess exposure and mitigate risks.

CISA Known Exploited Vulnerabilities
04

Anthropic wins preliminary injunction in DOD fight as judge cites 'First Amendment retaliation'

policy
Mar 26, 2026

A federal judge granted Anthropic a preliminary injunction, blocking the Trump administration's ban on federal agencies using the company's Claude AI models and its Pentagon blacklisting as a supply chain risk (a designation claiming use of a company's technology threatens national security). The judge ruled the administration's actions constituted First Amendment retaliation for Anthropic publicly disagreeing with the government's contracting decisions, though a final verdict in the case could take months.

CNBC Technology
05

Federal judge sides with Anthropic in first round of standoff with Pentagon

policy
Mar 26, 2026

Anthropic won a temporary legal victory when a federal judge ordered a pause on the Department of Defense's punishment of the company, which had refused to let the military use its Claude AI model in autonomous weapons systems (systems that can make attack decisions without human control). Anthropic claimed the government violated its free speech rights by declaring it a supply chain risk (a company whose products could be exploited to harm national security) and blocking agencies from using its technology.

The Guardian Technology
06

Preparing for agentic AI: A financial services approach

securitypolicy
Mar 26, 2026

Financial institutions deploying agentic AI (autonomous AI systems that make decisions and take actions independently) must add AI-specific security controls beyond traditional frameworks like ISO 27001 and NIST, because these systems' autonomous nature and non-deterministic behavior introduce unique risks. The source recommends two critical capabilities: comprehensive observability (clear visibility into what AI agents do and why) and fine-grained access controls (limiting what tools and actions each agent can use), supported by seven design principles including human-AI security homology (applying human oversight rules to AI agents) and modular agent workflow architecture.

Fix: The source provides design principles and implementation guidance rather than explicit patches or updates. It recommends: (1) implementing agent identities with role and attribute-based permissions; (2) adding logging and behavioral monitoring; (3) requiring supervision for critical actions; (4) defining agent scope in workflows; (5) applying segregation of agent duties; (6) using maker-checker verification (where one agent proposes an action and another verifies it); and (7) implementing change and incident management. The source also advises to 'consult with your compliance and legal teams to determine specific requirements for your situation' and notes that 'regulatory requirements establish minimum baselines, but organizational risk considerations often require additional controls.'

AWS Security Blog
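Two of the controls named above (a fine-grained, per-agent tool allowlist and maker-checker verification) can be sketched in a few lines. Everything here is hypothetical and illustrative: the agent names, tool names, and policy function come from this example, not from any real framework or from the AWS post itself.

```python
# Hypothetical sketch of two controls from the article:
# 1. fine-grained access control: each agent gets a deny-by-default
#    allowlist of tools it may invoke;
# 2. maker-checker verification: one agent proposes an action, and an
#    independent checker must approve it before execution.
ALLOWED_TOOLS = {
    "reporting-agent": {"read_ledger", "generate_report"},
    "payments-agent": {"read_ledger", "initiate_transfer"},
}

def authorize(agent: str, tool: str) -> bool:
    """Deny by default: unknown agents and unlisted tools are refused."""
    return tool in ALLOWED_TOOLS.get(agent, set())

def maker_checker(action: dict, checker) -> bool:
    """Execute the proposed action only if the checker approves it."""
    return checker(action)

# Usage: the payments agent may propose a transfer, but a separate
# policy (here a simple amount threshold) must approve it.
approve_small = lambda a: a["amount"] <= 1_000
print(authorize("payments-agent", "initiate_transfer"))   # True
print(authorize("reporting-agent", "initiate_transfer"))  # False
print(maker_checker({"amount": 500}, approve_small))      # True
```

A production version would log every authorization decision (the article's observability requirement) rather than returning a bare boolean.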
07

GHSA-7xr2-q9vf-x4r5: OpenClaw: Symlink Traversal via IDENTITY.md appendFile in agents.create/update (Incomplete Fix for CVE-2026-32013)

security
Mar 26, 2026

OpenClaw has a symlink traversal vulnerability (symlink: a file that points to another file) in two API handlers (`agents.create` and `agents.update`) that use `fs.appendFile` to write to an `IDENTITY.md` file without checking if it's a symlink. An attacker can place a symlink in the agent workspace pointing to a sensitive system file (like `/etc/crontab`), and when these handlers run, they will append attacker-controlled content to that sensitive file, potentially allowing remote code execution. This is an incomplete fix for CVE-2026-32013, which only patched two other handlers but missed these two.

GitHub Advisory Database
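The missing check the advisory describes, refusing to follow a symlink before appending, looks roughly like the sketch below. OpenClaw itself is a Node.js project (`fs.appendFile`); this Python version only illustrates the defensive pattern, and note that a check-then-write sequence like this still has a small time-of-check/time-of-use window.

```python
# Illustrative sketch of the check the advisory says was missing: refuse
# to append to a workspace file if it is a symlink, or if its resolved
# path escapes the workspace directory. Not OpenClaw's actual code.
import os

def safe_append(workspace: str, name: str, content: str) -> None:
    path = os.path.join(workspace, name)
    if os.path.islink(path):
        # An attacker-planted symlink (e.g. to /etc/crontab) is rejected.
        raise PermissionError(f"refusing to follow symlink: {path}")
    real = os.path.realpath(path)
    if not real.startswith(os.path.realpath(workspace) + os.sep):
        raise PermissionError(f"path escapes workspace: {path}")
    with open(path, "a") as f:  # note: still a small TOCTOU window
        f.write(content)
```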
08

Google is making it easier to import another AI’s memory into Gemini

industry
Mar 26, 2026

Google Gemini is adding new features that let users transfer their chat history and memory from other AI assistants into Gemini. The "Import Memory" tool works by copying a prompt from Gemini into your previous AI, then pasting the response back into Gemini, while "Import Chat History" lets you export all your past conversations from another AI and upload them to Gemini.

The Verge (AI)
09

Apple will reportedly allow other AI chatbots to plug into Siri

industry
Mar 26, 2026

Apple's upcoming iOS 27 update will let users choose which AI chatbot to connect with Siri (Apple's voice assistant), including options like Google's Gemini or Anthropic's Claude downloaded from the App Store. The new feature, called "Extensions," will allow users to enable or disable different chatbots across iPhones, iPads, and Macs, expanding beyond the current ChatGPT integration.

The Verge (AI)
10

CVE-2026-33623: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. PinchTab `v0.8.4` contai

security
Mar 26, 2026

PinchTab v0.8.4, a tool that lets AI agents control Chrome browsers through an HTTP server, has a command injection vulnerability on Windows where attackers can run arbitrary PowerShell commands if they have administrative access to the server's API. The vulnerability exists because the cleanup routine doesn't properly escape PowerShell metacharacters (special characters that PowerShell interprets as commands) when building cleanup commands from profile names.

Fix: Version 0.8.5 contains a patch for the issue.

NVD/CVE Database
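The general defense against this class of flaw is to never interpolate untrusted names into a shell command at all: validate them against a strict allowlist instead of trying to escape every PowerShell metacharacter. The sketch below is illustrative (the pattern and names are this example's, not PinchTab's actual code).

```python
import re

# Allow only short names made of letters, digits, underscore, and hyphen,
# so no PowerShell metacharacter (;, $, (, |, backtick, ...) can survive
# into a later cleanup command built from the profile name.
SAFE_PROFILE = re.compile(r"[A-Za-z0-9_-]{1,64}")

def is_safe_profile(name: str) -> bool:
    return SAFE_PROFILE.fullmatch(name) is not None

print(is_safe_profile("default-profile"))      # True
print(is_safe_profile("x; Start-Process calc"))  # False
```

Combined with passing arguments as a list (no shell string) to the process API, this removes the injection surface rather than patching around it.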
critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)

critical · CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)

critical · GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE (CVE-2026-33696, GitHub Advisory Database, Mar 26, 2026)