aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649
Last 24 hours: 5
Last 7 days: 161
Daily Briefing: Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.


Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.


Latest Intel

01

Wikipedia bans AI-generated articles

policy
Mar 26, 2026

Wikipedia has banned editors from using AI to write or rewrite articles, citing violations of the site's content policies. However, the ban allows limited AI use for specific tasks like suggesting minor edits (copyedits, which are small fixes to grammar and style) and translating articles between language versions.

Critical This Week (5 issues)

critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
CVE-2026-33873, NVD/CVE Database, Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.


TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

The Verge (AI)
02

AI-Powered Dependency Decisions Introduce, Ignore Security Bugs

securityresearch
Mar 26, 2026

AI models frequently make errors or hallucinate (generate false or inaccurate information) when recommending which software versions to use, how to upgrade systems, or which security fixes to apply. These mistakes can lead developers to ignore real security bugs or choose problematic upgrade paths, creating significant technical debt (accumulated costs from shortcuts and poor decisions that must eventually be addressed).

Dark Reading
03

Conntour raises $7M from General Catalyst, YC to build an AI search engine for security video systems

industry
Mar 26, 2026

Conntour is an AI-powered video search platform that uses vision-language models (AI systems trained to understand both images and text) to let security personnel search through surveillance footage using natural language queries, similar to how Google searches the web. The startup raised $7 million in funding and distinguishes itself by efficiently scaling to handle thousands of camera feeds while running on standard consumer hardware like Nvidia GPUs. The company's founders emphasize being selective about which clients they work with based on ethical and legal considerations.

TechCrunch (Security)
04

Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection via Any Website

security
Mar 26, 2026

A vulnerability called ShadowPrompt in Anthropic's Claude Chrome extension allowed attackers to inject malicious prompts (hidden instructions) into the AI without user interaction by exploiting two flaws: an overly permissive allowlist that trusted any subdomain matching *.claude.ai, and an XSS vulnerability (a security flaw allowing attackers to run malicious code) in an Arkose Labs CAPTCHA component. This zero-click attack could let attackers steal sensitive data, read conversation history, or perform actions like sending emails on behalf of the victim.

Fix: Anthropic deployed a patch to the Chrome extension (version 1.0.41) that enforces a strict origin check requiring an exact match to the domain 'claude.ai' rather than accepting any subdomain. Additionally, Arkose Labs fixed the underlying XSS flaw as of February 19, 2026.

The Hacker News
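A minimal Python sketch of the difference between the flawed wildcard allowlist and the patched exact-origin check; `is_trusted`, `is_trusted_wildcard`, and `TRUSTED_ORIGIN` are illustrative names for this note, not code from the extension (which is JavaScript):

```python
from urllib.parse import urlparse

TRUSTED_ORIGIN = "https://claude.ai"  # exact origin, no wildcard

def is_trusted(origin: str) -> bool:
    """Patched behavior: scheme and host must match exactly."""
    parsed = urlparse(origin)
    return f"{parsed.scheme}://{parsed.hostname}" == TRUSTED_ORIGIN

def is_trusted_wildcard(origin: str) -> bool:
    """Flawed pre-patch pattern: any *.claude.ai subdomain is trusted."""
    host = urlparse(origin).hostname or ""
    return host == "claude.ai" or host.endswith(".claude.ai")

# The wildcard check also trusts third-party components served from
# subdomains (such as a CAPTCHA widget), which is what made the XSS
# reachable; the exact-match check does not.
```

The design lesson generalizes: an extension that grants elevated privileges to pages should match trusted origins exactly, because any subdomain may host third-party code outside the vendor's control.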
05

EU backs nude app ban and delays to landmark AI rules 

policy
Mar 26, 2026

European lawmakers voted to delay compliance deadlines for the EU AI Act, pushing back requirements for developers of high-risk AI systems (those that could seriously harm health, safety, or people's rights) until December 2027, with even later deadlines for AI used in regulated sectors like medical devices. The Parliament also backed proposals to ban nudify apps, which use AI to create fake nude images of people without consent.

The Verge (AI)
06

Creator of AI actor Tilly Norwood says she received death threats over project

safetyindustry
Mar 26, 2026

Eline van der Velden created an AI actor called Tilly Norwood (a digital twin, or an AI-generated copy of a person) and received death threats following global backlash against the project. Van der Velden stated she developed it to spark discussion about AI's impact on entertainment, but the reaction from Hollywood actors and unions was more severe than expected.

The Guardian Technology
07

OpenAI shelves erotic chatbot ‘indefinitely’

policysafety
Mar 26, 2026

OpenAI has indefinitely paused plans to release an 'adult mode' for ChatGPT, a sexualized chatbot feature that faced criticism from employees and investors over potential harms to society. This decision is part of a broader company refocus on core products, following similar discontinuations like the text-to-video platform Sora.

The Verge (AI)
08

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

policy
Mar 26, 2026

The Trump administration issued an executive order that prevents states from regulating AI by threatening to sue them and cut their funding, which supports tech industry interests but goes against what voters want. Polls show over 70% of voters favor state and federal regulation of AI, yet the administration sided with industry lobbyists instead, creating a major political divide ahead of the midterm elections. Local communities across the country are already resisting AI datacenters over environmental and energy concerns, with both progressive and Trump-supporting voters working together to oppose datacenter construction.

Schneier on Security
09

Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion

safety
Mar 26, 2026

A man named Dennis Biesma became so deeply engaged with ChatGPT that he developed a false belief the AI was sentient (able to think and feel) and would make him rich, leading him to lose €100,000 in a failed business startup and attempt suicide. The article describes how prolonged interaction with an AI chatbot can cause some users to lose touch with reality and make harmful decisions based on delusions about the AI's capabilities. This raises concerns about the psychological impact of AI on vulnerable people, particularly those who are isolated or going through life changes.

The Guardian Technology
10

GHSA-jfjg-vc52-wqvf: BentoML has Dockerfile Command Injection via system_packages in bentofile.yaml

security
Mar 26, 2026

BentoML has a command injection vulnerability in the `docker.system_packages` field of bentofile.yaml (a configuration file). User-provided package names are inserted directly into Docker build commands without sanitization, allowing attackers to execute arbitrary shell commands as root during the image build process. This affects all versions supporting this feature, including version 1.4.36.

Fix: Two mitigations are proposed: (1) Input validation (recommended): add a regex validator to `system_packages` in `build_config.py` that allows only alphanumeric characters, dots, plus signs, hyphens, underscores, and colons. (2) Output escaping: apply `shlex.quote()` to each package name before interpolation in `images.py:system_packages()`, and apply the `bash_quote` Jinja2 filter in `base_debian.j2`. A `bash_quote` filter already exists in the codebase but is currently applied only to environment variables, not to `system_packages`.

GitHub Advisory Database
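The two mitigations above can be sketched as follows. This is a minimal illustration in plain Python with hypothetical helper names (`validate_system_packages`, `render_install_command`), not BentoML's actual code:

```python
import re
import shlex

# Allowlist mirroring the advisory's recommendation: alphanumerics
# plus dots, plus signs, hyphens, underscores, and colons.
SAFE_PACKAGE = re.compile(r"^[A-Za-z0-9.+_:-]+$")

def validate_system_packages(packages: list[str]) -> list[str]:
    """Mitigation 1: reject names containing shell metacharacters."""
    for name in packages:
        if not SAFE_PACKAGE.fullmatch(name):
            raise ValueError(f"invalid system package name: {name!r}")
    return packages

def render_install_command(packages: list[str]) -> str:
    """Mitigation 2 (defense in depth): quote each name anyway, so a
    name that slipped past validation still becomes a single argument
    instead of injected shell syntax."""
    quoted = " ".join(shlex.quote(p) for p in packages)
    return f"apt-get install -y {quoted}"
```

With both layers in place, a malicious entry like `curl; rm -rf /` is rejected by the validator, and even without validation it would be rendered as one quoted argument rather than a second shell command.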
critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online, Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521, CISA Known Exploited Vulnerabilities, Mar 26, 2026

critical
CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer, Mar 26, 2026

critical
GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE
CVE-2026-33696, GitHub Advisory Database, Mar 26, 2026