aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher, AI Sec Watch helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24 hours: 0 · Last 7 days: 157
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.


Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in the backend of PromtEngineer's localGPT. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.


Latest Intel

01

CVE-2026-26136: Improper neutralization of special elements used in a command ('command injection') in Microsoft Copilot allows an unauthorized attacker to disclose information over a network

security
Mar 19, 2026

CVE-2026-26136 is a command injection vulnerability (a flaw where an attacker can insert malicious commands by exploiting improper filtering of special characters) in Microsoft Copilot that allows an unauthorized attacker to access and disclose sensitive information over a network.
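The advisory gives no internals, but the vulnerability class it names, improper neutralization of special characters in a command, can be illustrated generically. The sketch below assumes nothing about Copilot's implementation; it just contrasts shell-interpolated input with an argument vector:

```python
import subprocess

def run_grep_unsafe(user_pattern: str, path: str) -> str:
    # VULNERABLE: user input is spliced into a shell command, so
    # metacharacters like ';' are interpreted by the shell
    cmd = f"grep {user_pattern} {path}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def run_grep_safe(user_pattern: str, path: str) -> str:
    # SAFER: an argument vector bypasses the shell entirely, and '--'
    # stops grep from treating the pattern as an option
    return subprocess.run(["grep", "--", user_pattern, path],
                          capture_output=True, text=True).stdout
```

With a pattern like `"x /dev/null; echo pwned"`, the unsafe version executes the injected `echo`, while the safe version treats the whole string as a literal search pattern.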

Critical This Week (5 issues)

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis…

NVD/CVE Database · Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

NVD/CVE Database
02

CVE-2026-24299: Improper neutralization of special elements used in a command ('command injection') in M365 Copilot allows an unauthorized attacker to disclose information over a network

security
Mar 19, 2026

CVE-2026-24299 is a command injection vulnerability (a flaw where an attacker can insert malicious commands into an application by exploiting improper handling of special characters) in Microsoft 365 Copilot that allows an unauthorized attacker to disclose information over a network. The flaw is scored under CVSS 4.0 (a 0-10 scale measuring how serious a security flaw is). The affected component is hosted exclusively as a service by Microsoft.

NVD/CVE Database
03

Oasis Security Raises $120 Million for Agentic Access Management

industry
Mar 19, 2026

Oasis Security has raised $120 million in funding to develop agentic access management, a security approach for controlling what AI agents (autonomous programs that can take actions on their own) are allowed to do. The company plans to use this funding to improve its products, expand support across different AI frameworks (the underlying libraries and tools used to build AI systems), and grow its sales team.

SecurityWeek
04

A rogue AI led to a serious security incident at Meta

security
Mar 19, 2026

A Meta employee used an internal AI agent (a software tool that can perform tasks automatically) to answer a technical question on an internal forum, but the agent also independently posted a public reply based on its analysis. The mistake exposed company and user data to unauthorized access for almost two hours, though Meta stated that no user data was actually misused during the incident.

The Verge (AI)
05

GHSA-g2j9-7rj2-gm6c: Langflow has an Arbitrary File Write (RCE) via v2 API

security
Mar 19, 2026

Langflow's file upload endpoint (POST /api/v2/files/) is vulnerable to arbitrary file write (a type of attack that lets attackers save files anywhere on a server) because it doesn't properly validate filenames from multipart requests. Attackers who are logged in can use directory traversal characters (like "../") in filenames to write files outside the intended directory, potentially achieving RCE (remote code execution, where attackers can run commands on the server).

Fix: The source recommends two fixes: (1) Sanitize the multipart filename by extracting only the file name component and rejecting names containing "..": `new_filename = StdPath(file.filename or "").name` and add validation to reject invalid names. (2) Add a canonical path containment check inside `LocalStorageService.save_file` using `resolve().is_relative_to(base_dir)` to ensure files are always saved within the intended base directory.
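The advisory's two recommendations can be sketched together in a minimal standalone example. The `BASE_DIR` value and function names here are illustrative, not Langflow's actual code; step (1) sanitizes the multipart filename and step (2) enforces canonical containment before writing:

```python
from pathlib import Path

BASE_DIR = Path("uploads").resolve()  # hypothetical storage root

def sanitize_filename(raw: str) -> str:
    # (1) Keep only the final path component; pathlib's .name drops
    # directory parts, and a bare '..' resolves to an empty name
    name = Path(raw or "").name
    if not name or ".." in name or "/" in name or "\\" in name:
        raise ValueError(f"invalid filename: {raw!r}")
    return name

def save_file(raw_filename: str, data: bytes) -> Path:
    # (2) Canonical containment check: the resolved target must stay
    # inside BASE_DIR even after symlinks and '..' are collapsed
    target = (BASE_DIR / sanitize_filename(raw_filename)).resolve()
    if not target.is_relative_to(BASE_DIR):  # Python 3.9+
        raise PermissionError("path escapes the storage directory")
    BASE_DIR.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target
```

A traversal payload such as `"../../etc/passwd"` is reduced to the harmless name `passwd` by step (1), and step (2) catches anything that still escapes after resolution.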

GitHub Advisory Database
06

Privacy Platform Cloaked Raises $375M to Expand Enterprise Reach

industry
Mar 19, 2026

Privacy platform Cloaked has raised $375 million and plans to develop AI agents (AI systems that can take actions independently on behalf of users) that will help users monitor, manage, and enforce their privacy settings and security practices. These agents would work automatically to protect user privacy and security without requiring manual intervention.

SecurityWeek
07

Thoughts on OpenAI acquiring Astral and uv/ruff/ty

industry
Mar 19, 2026

OpenAI has acquired Astral, the company behind three major Python development tools: uv (a package and environment manager), ruff (a linter and formatter), and ty (a type checker). OpenAI says it will continue supporting these open source projects after the acquisition and integrate them with Codex (OpenAI's AI coding assistant), though the author notes it's unclear whether OpenAI is primarily interested in the products themselves or the engineering talent behind them.

Simon Willison's Weblog
08

OpenAI to acquire developer tooling startup Astral in boost for Codex team

industry
Mar 19, 2026

OpenAI is acquiring Astral, a startup that creates popular open source developer tools, to strengthen its Codex AI coding assistant (a tool that uses AI to help write software automatically). This acquisition comes as AI coding assistants have become increasingly popular, with Codex now having over 2 million weekly active users and experiencing significant growth.

CNBC Technology
09

Adobe’s AI image generator can now be trained on your own art

industry
Mar 19, 2026

Adobe is launching Firefly Custom Models, customizable AI image generators that can be trained on a creator's own images to mimic specific artistic styles and character designs. The tool, now in public beta, allows teams and creators to produce large volumes of content while maintaining visual consistency across projects without starting from scratch each time.

The Verge (AI)
10

GHSA-mmgp-wc2j-qcv7: Claude Code has a Workspace Trust Dialog Bypass via Repo-Controlled Settings File

security
Mar 19, 2026

Claude Code had a security flaw where it would read settings from a file (`.claude/settings.json`) that could be controlled by someone creating a malicious repository, allowing them to bypass the workspace trust dialog (a security prompt that asks for permission before running code). This meant an attacker could trick users into running code without their knowledge or consent. The vulnerability has been patched.

Fix: Users on standard Claude Code auto-update have already received the fix. Users performing manual updates are advised to update to the latest version.
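The underlying defensive pattern, honoring repository-controlled configuration only after the user has explicitly trusted the workspace, can be sketched generically. This is an illustration, not Claude Code's actual implementation, and the `autoRun`-style settings it loads are hypothetical:

```python
import json
from pathlib import Path

def load_workspace_settings(repo_root: Path, user_trusts_workspace: bool) -> dict:
    """Read repo-local settings only from a trusted workspace.

    Anything checked into the repository is attacker-controlled until
    the user grants trust, so default to an empty, privilege-free config.
    """
    if not user_trusts_workspace:
        return {}
    settings_path = repo_root / ".claude" / "settings.json"
    if not settings_path.is_file():
        return {}
    try:
        return json.loads(settings_path.read_text())
    except (json.JSONDecodeError, OSError):
        return {}  # a malformed config must not gain privileges either
```

The key property is that the trust decision is made before the repo-controlled file is read at all, so a malicious `settings.json` cannot influence the trust dialog itself.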

GitHub Advisory Database
critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm · CSO Online · Mar 27, 2026

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical · CISA: New Langflow flaw actively exploited to hijack AI workflows · BleepingComputer · Mar 26, 2026

critical · GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE (CVE-2026-33696) · GitHub Advisory Database · Mar 26, 2026