aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSaturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 90/371
VIEW ALL
01

CVE-2026-34524: SillyTavern is a locally installed user interface that allows users to interact with text generation large language mode

security
Apr 2, 2026

SillyTavern is a locally installed interface for interacting with text generation AI models and related tools. Before version 1.17.0, it had a path traversal vulnerability (a flaw where an attacker can access files outside the intended directory) that allowed authenticated attackers to read and delete arbitrary files like secrets.json and settings.json by manipulating the avatar_url parameter.

Fix: This issue has been patched in version 1.17.0. Users should update to version 1.17.0 or later.

NVD/CVE Database
02

CVE-2026-34523: SillyTavern is a locally installed user interface that allows users to interact with text generation large language mode

security
Apr 2, 2026

SillyTavern is a locally installed interface for interacting with text generation models and AI tools. Before version 1.17.0, it had a path traversal vulnerability (a flaw that lets attackers access files outside the intended directory) that allowed unauthenticated users to check whether files exist anywhere on the server by sending specially encoded requests with "../" sequences to the file routes.

Fix: This issue has been patched in version 1.17.0.

NVD/CVE Database
03

CVE-2026-34522: SillyTavern is a locally installed user interface that allows users to interact with text generation large language mode

security
Apr 2, 2026

SillyTavern, a locally installed interface for interacting with AI text generation models, had a path traversal vulnerability (a flaw that lets attackers write files outside the intended directory) in its /api/chats/import feature prior to version 1.17.0. An authenticated attacker could exploit this by injecting traversal sequences into the character_name field to place malicious files outside the chats directory.

Fix: This issue has been patched in version 1.17.0. Users should upgrade to version 1.17.0 or later.

NVD/CVE Database
04

Critical Vulnerability in Claude Code Emerges Days After Source Leak

security
Apr 2, 2026

Anthropic's Claude Code source code was leaked, and shortly after, security researchers at Adversa AI discovered a critical vulnerability in the tool. The incident highlights how exposing source code can quickly lead to the discovery of serious security flaws.

SecurityWeek
05

OpenAI just bought TBPN

industry
Apr 2, 2026

OpenAI has acquired TBPN, a popular online talk show that broadcasts live weekday episodes and features interviews with AI executives and tech leaders, positioning itself as competition to traditional financial news channels like Bloomberg and CNBC. The show's host stated it will continue operating as before under OpenAI's ownership, marking a reunion between the host and OpenAI CEO Sam Altman, who had previously funded the host's company.

The Verge (AI)
06

Gemma 4: Byte for byte, the most capable open models

industry
Apr 2, 2026

Google DeepMind has released Gemma 4, a family of open-source AI models available in four sizes (2B to 31B parameters, where parameters are the trainable weights in a neural network) designed for complex reasoning and agentic workflows (AI systems that can autonomously plan and use tools to complete tasks). The models are optimized to run efficiently on various hardware from mobile phones to workstations and support advanced features like multimodal processing (handling text, images, video, and audio), function-calling for tool integration, and context windows up to 256K tokens (units of text the model can process in one response).

DeepMind Safety Research
07

Google Workspace’s continuous approach to mitigating indirect prompt injections

securitysafety
Apr 2, 2026

Indirect prompt injection (IPI) is a security threat where attackers hide malicious instructions in data or tools that an AI system uses, potentially influencing how it behaves without direct user input. Google treats IPI as an ongoing challenge rather than a one-time problem to solve, using multiple continuous strategies including human red-teaming (adversarial simulations), automated red-teaming (machine-learning-driven attack testing), a vulnerability rewards program for external researchers, and monitoring of publicly disclosed attacks to stay ahead of evolving threats.

Google Online Security Blog
08

Threat actor abuse of AI accelerates from tool to cyberattack surface

securityindustry
Apr 2, 2026

Threat actors are now embedding AI into their cyberattacks to make them more effective and precise, rather than just faster. AI is helping attackers craft better phishing emails (resulting in 54% click-through rates versus 12% traditionally), develop malware, and steal data more efficiently, while humans still oversee the operations. Organizations face a major security challenge because AI-enabled phishing is now far more targeted and harder to defend against at scale, especially when combined with systems designed to bypass multifactor authentication (MFA, a security method that requires multiple forms of verification).

Microsoft Security Blog
09

It’s not easy to get depression-detecting AI through the FDA

industrypolicy
Apr 2, 2026

Kintsugi, a California startup, spent seven years developing AI to detect depression and anxiety by analyzing how someone speaks rather than what they say. The company is shutting down after failing to get FDA (Food and Drug Administration, the U.S. agency that approves medical products) clearance, though it is releasing its technology as open-source software so others can use and build on it.

The Verge (AI)
10

Cybersecurity M&A Roundup: 38 Deals Announced in March 2026

industry
Apr 2, 2026

This article reports on 38 cybersecurity mergers and acquisitions (M&A, or business deals where one company buys another) announced in March 2026 by major companies including Airbus, Cellebrite, Databricks, Quantum eMotion, Rapid7, and OpenAI. The source provides only a high-level announcement of these deals without detailed technical or security content.

SecurityWeek
Prev1...8889909192...371Next