aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1

Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

LangChain path traversal bug adds to input validation woes in AI pipelines

security
Mar 30, 2026

LangChain and LangGraph, popular AI frameworks that connect models to business systems, contain critical security flaws that let attackers steal sensitive data such as API keys and files through improper input handling. The newest is a path traversal bug (CVE-2026-34070, scored 7.5 on the CVSS severity scale) in which crafted input lets attackers read arbitrary files, while two older flaws enable data theft through unsafe deserialization (treating untrusted data as safe) and SQL injection (manipulating database queries). The maintainers have released fixes that should be applied immediately to prevent exploitation.

Fix: The source explicitly recommends the following mitigations: For path traversal, enforce allowlists for file access and restrict directory boundaries. For deserialization vulnerabilities, avoid unsafe deserialization methods and ensure only validated, expected data structures are processed. For SQL injection, use parameterized queries (pre-structured database requests that safely handle user input) and strengthen input sanitization. The source notes that fixes from the tools' maintainers are now available but must be applied immediately across integrations.
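Two of the recommended mitigations can be sketched in a few lines of Python. This is an illustration of the general techniques, not LangChain's actual patch: the root directory and the `users` table are hypothetical, as are both function names.

```python
import sqlite3
from pathlib import Path

def is_within_root(user_path: str, root: str = "/srv/app/data") -> bool:
    """Allowlist-style check: resolve the requested path and accept it only
    if it stays inside the permitted root (rejects '../' traversal)."""
    root_p = Path(root).resolve()
    candidate = (root_p / user_path).resolve()
    return candidate.is_relative_to(root_p)  # Path.is_relative_to: Python 3.9+

def find_user(conn: sqlite3.Connection, name: str):
    """Parameterized query: the driver binds `name` as data, so it is never
    spliced into the SQL text and cannot alter the query's structure."""
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

The same two patterns (resolve-then-compare for file access, placeholder binding for SQL) apply regardless of framework.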

CSO Online
02

Leak reveals Anthropic’s ‘Mythos,’ a powerful AI model aimed at cybersecurity use cases

security, industry
Mar 30, 2026

Anthropic's unreleased AI model, codenamed Mythos, was accidentally exposed through a configuration error in its content management system (CMS, software that organizes and stores digital content), revealing a more powerful LLM with advanced reasoning and coding abilities. The leak raises security concerns because the model's improved skills at finding and exploiting software vulnerabilities could make cyberattacks easier while also helping defenders, and its capability for recursive self-fixing (autonomously identifying and patching its own code problems) narrows the gap between human and AI-level hacking. Anthropic plans a phased rollout targeting enterprise security teams first before broader release.

CSO Online
03

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code…

security
Mar 30, 2026

MLflow has a command injection vulnerability (a type of attack where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The vulnerability occurs because MLflow reads dependency information from a file called `python_env.yaml` in the model artifact and directly uses it in a shell command without checking if it's safe, allowing an attacker to execute arbitrary commands on the system deploying the model.

Fix: Version 3.8.0 is affected; update MLflow to version 3.8.2.
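The underlying anti-pattern, untrusted file contents flowing into a shell string, can be avoided by validating dependency specifiers and invoking the installer without a shell. A minimal sketch of that defense (not MLflow's actual fix; the regex and function names are illustrative, and the allowlist is far stricter than the full PEP 508 grammar):

```python
import re
import subprocess

# Rough allowlist for requirement strings like "numpy==1.26.4" (illustrative):
# a package name, an optional version operator, and an optional version.
_SPEC = re.compile(r"^[A-Za-z0-9._-]+(==|>=|<=|~=|!=|<|>)?[A-Za-z0-9.*+!_-]*$")

def validate_requirement(spec: str) -> str:
    """Reject anything containing spaces, shell metacharacters, or other
    characters outside the narrow allowlist above."""
    if not _SPEC.fullmatch(spec):
        raise ValueError(f"rejected suspicious requirement: {spec!r}")
    return spec

def install(requirements: list[str]) -> None:
    # Argument list with shell=False: nothing here is shell-interpreted, so
    # even a malicious python_env.yaml entry cannot inject an extra command.
    args = ["pip", "install", *(validate_requirement(r) for r in requirements)]
    subprocess.run(args, shell=False, check=True)
```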

NVD/CVE Database
04

Mistral secures $830 million in debt financing to fund AI data center

industry
Mar 30, 2026

Mistral, a French AI startup, secured $830 million in debt financing to build a data center powered by thousands of Nvidia graphics processing units (GPUs, specialized chips used for AI training). The new data center near Paris will support training of Mistral's large language models (LLMs, AI systems trained on vast amounts of text) and will become operational in the second quarter of 2025, with plans to expand European computing capacity to 200 MW by the end of 2027.

CNBC Technology
05

CVE-2025-15036: A path traversal vulnerability exists in the `extract_archive_to_dir` function…

security
Mar 29, 2026

A path traversal vulnerability (a security flaw where an attacker uses special path names like '../' to access files outside intended directories) exists in MLflow's archive extraction function that doesn't validate the contents of tar.gz files before extracting them. An attacker who controls the tar.gz file can overwrite arbitrary files or escape sandbox restrictions (isolated environments that limit what code can access) in shared computing environments.

Fix: Update to mlflow version v3.7.0 or later.
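The general defense against this class of bug is to inspect every archive member before extraction. A sketch of such a check (illustrative, not MLflow's actual patch; note that Python 3.12+ offers `tar.extractall(dest, filter="data")`, which performs similar validation natively):

```python
import tarfile
from pathlib import Path

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar.gz only after verifying each member stays inside dest_dir."""
    dest = Path(dest_dir).resolve()
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            target = (dest / member.name).resolve()
            if not target.is_relative_to(dest):  # blocks '../' escapes
                raise ValueError(f"blocked traversal entry: {member.name}")
            if member.issym() or member.islnk():  # links can also escape the dir
                raise ValueError(f"blocked link entry: {member.name}")
        tar.extractall(dest)
```

Validating member names before calling `extractall` means a hostile archive is rejected whole rather than partially extracted.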

NVD/CVE Database
06

All the latest in AI ‘music’

industry, policy
Mar 29, 2026

AI is now used throughout the music industry for tasks like creating songs, building playlists, and detecting AI-generated content, but this raises major concerns: copyright (legal ownership of creative work), whether AI outputs count as art, and whether AI-generated music will flood the market and harm human musicians. The industry is split: platforms like Apple Music and Deezer now label AI music, Bandcamp has banned AI content entirely, and major record labels are pursuing lawsuits against AI music companies.

The Verge (AI)
07

Helping disaster response teams turn AI into action across Asia

industry
Mar 29, 2026

OpenAI and partner organizations held an 'AI Jam' workshop in Bangkok with 50 disaster management leaders from 13 Asian countries to explore practical ways AI can improve emergency response. The workshop focused on building custom GPTs (customizable versions of OpenAI's generative pre-trained transformer assistants) and workflows for tasks like situation reporting and needs assessment, helping disaster response teams that face fragmented data and resource constraints work faster and more effectively.

OpenAI Blog
08

Bluesky’s new app is an AI for customizing your feed

industry
Mar 29, 2026

Bluesky has released Attie, a new AI assistant powered by Claude (Anthropic's language model) that helps users create custom feeds using natural language instructions instead of traditional algorithmic settings. Users can describe what content they want to see, like 'posts about folklore, mythology, and traditional music, especially Celtic traditions,' and Attie builds a personalized feed based on that description, with plans to integrate it into Bluesky and other apps built on the AT Protocol (Bluesky's underlying technical foundation).

The Verge (AI)
09

CVE-2026-5002: A vulnerability has been found in PromtEngineer localGPT up to commit 4d41c7d1713b16b216d8e062e51a5dd88b20b054…

security
Mar 28, 2026

A vulnerability (CVE-2026-5002) was discovered in PromtEngineer localGPT that allows injection attacks (inserting malicious code into input) through the LLM Prompt Handler component in the backend/server.py file. An attacker can exploit this vulnerability remotely, and the exploit code has been publicly released. The vendor has not responded to disclosure attempts, and because the product uses rolling releases (continuous updates without traditional version numbers), specific patch information is unavailable.

NVD/CVE Database
10

TikTok’s policy for AI ads isn’t working

policy, safety
Mar 28, 2026

Companies like Samsung are posting ads on TikTok that appear to be made with generative AI (AI systems that create images or videos from text descriptions), but they're not adding the required AI disclosure labels that TikTok's advertising policies demand. This means users can't easily tell whether the ads they see are AI-generated or made by humans, even though the companies creating them know the truth.

The Verge (AI)