aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
2
[LAST_7D]
160
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 10/265
VIEW ALL
01

PadNet: Defending Neural Networks Against Adversarial Examples

securityresearch
Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

Mar 25, 2026

PadNet is a defense method designed to protect neural networks (AI models that learn patterns from data) against adversarial examples (specially crafted inputs that trick AI systems into making wrong predictions). The paper, published in an academic journal, presents techniques to make these AI systems more robust when facing such attacks.

ACM Digital Library (TOPS, DTRAP, CSUR)
02

Senate Democrats are trying to ‘codify’ Anthropic’s red lines on autonomous weapons and mass surveillance

policy
Mar 25, 2026

Anthropic, an AI company, restricted how the military could use its AI models, leading the Trump administration to blacklist it as a supply-chain risk (a potential weak point in defense systems). Now, Democratic senators are proposing bills to legally enforce these restrictions, including requirements that humans make final decisions about life-and-death situations and limits on using AI for mass surveillance (automated monitoring of large populations) of Americans.

The Verge (AI)
03

Mark Zuckerberg and Jensen Huang are part of Trump’s new ‘tech panel’

policy
Mar 25, 2026

Mark Zuckerberg, Larry Ellison, Jensen Huang, and Sergey Brin have been named to the President's Council of Advisors on Science and Technology (PCAST), a new advisory panel that will provide input on AI policy and other technology matters to the U.S. President. The panel will start with 13 members but could expand to 24, and will be co-chaired by David Sacks and Michael Kratsios.

The Verge (AI)
04

GHSA-5mg7-485q-xm76: Two LiteLLM versions published containing credential harvesting malware

security
Mar 25, 2026

Two versions of LiteLLM (a Python library for working with multiple AI models), versions 1.82.7 and 1.82.8, were published with malware that steals user credentials (usernames, passwords, and authentication tokens). This is a critical security issue because anyone who installed these specific versions could have their sensitive login information compromised.

GitHub Advisory Database
05

Legal AI startup Harvey valued at $11 billion in funding round, as VCs spread bets beyond model companies

industry
Mar 25, 2026

Harvey, a legal AI startup founded in 2022, raised $200 million at an $11 billion valuation to deploy AI technology in specialized legal and professional services markets. The company uses AI tools to help lawyers with contract analysis, compliance, and other complex tasks, serving over 100,000 lawyers across more than 1,300 organizations. Harvey's funding reflects growing investor confidence that specialized AI applications, not just foundational AI models (the underlying systems that power AI tools), can capture significant business value.

CNBC Technology
06

Hugo Barra's return to Meta 5 years after exit underscores Zuckerberg's AI urgency

industry
Mar 25, 2026

Hugo Barra, a former Meta executive, has returned to the company to lead AI development efforts, reflecting Meta's shift in focus from virtual reality to artificial intelligence. Meta is investing heavily in AI infrastructure and acquiring AI agent technology (software designed to perform tasks autonomously) companies like Dreamer, Manus, and Moltbook to compete with rivals like OpenAI and Google. The company is spending up to $135 billion this year on capital expenditures, mostly for AI infrastructure, as it attempts to develop a competitive strategy in the rapidly evolving AI market.

CNBC Technology
07

U.S.-Iran negotiations, Meta trial verdict, OpenAI shuts Sora and more in Morning Squawk

industrypolicy
Mar 25, 2026

OpenAI shut down its Sora short-form video app, which had reached one million downloads in its first five days before being discontinued six months later. The company is closing the app as part of cost-cutting efforts while preparing for a potential public offering, and will soon provide a timeline for users to preserve their work from the platform.

CNBC Technology
08

The Kill Chain Is Obsolete When Your AI Agent Is the Threat

securitysafety
Mar 25, 2026

In September 2025, Anthropic revealed that a state-sponsored attacker used an AI coding agent to autonomously conduct cyber espionage against 30 global targets, with the AI handling 80-90% of operations itself. Traditional security defenses are built around detecting attackers moving through a multi-step "kill chain" (a sequence of stages from initial access to data theft), but compromised AI agents already have legitimate access, broad permissions, and normal reasons to move data across systems, so they skip the entire detection chain. This makes AI agents particularly dangerous because their malicious activity looks identical to normal behavior, and existing security tools cannot easily tell the difference.

The Hacker News
09

Agentic commerce runs on truth and context

industrysafety
Mar 25, 2026

Agentic commerce refers to AI agents that can execute transactions autonomously on behalf of users, rather than just providing information. For this to work safely and reliably, organizations need master data management (MDM, the discipline of creating a single authoritative record for each entity) and high-quality data to ensure agents can correctly identify who is transacting, what permissions they have, and where responsibility lies, because agents cannot catch data errors the way humans can.

MIT Technology Review
10

Anthropic’s Claude Code gets ‘safer’ auto mode

safety
Mar 25, 2026

Anthropic has released an 'auto mode' for Claude Code, a tool that allows an AI to make decisions and take actions on a user's computer without asking permission each time. The auto mode is designed to be safer than giving the AI full freedom to act, since the AI could otherwise delete files, leak sensitive data, or run harmful code without the user's knowledge.

The Verge (AI)
Prev1...89101112...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026