aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
New in the last 24 hours: 1
New in the last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

Trent AI Emerges From Stealth With $13 Million in Funding

security, industry
Apr 7, 2026

Trent AI, a new startup, has secured $13 million in funding to develop a layered security solution (a multi-level protective system) designed to protect AI agents (software programs that act autonomously to complete tasks) throughout their entire lifecycle, from creation to deployment.

SecurityWeek
02

[Webinar] How to Close Identity Gaps in 2026 Before AI Exploits Enterprise Risk

security, policy
Apr 7, 2026

Many enterprises have applications disconnected from centralized identity systems (systems that control who can access what), creating blind spots that AI agents and attackers are actively exploiting. While organizations have invested in IAM (identity and access management, the practice of controlling user access) and Zero Trust security, legacy apps and siloed systems remain outside of centralized control, allowing AI agents to amplify credential risks and bypass security oversight.

The Hacker News
03

CVE-2026-35487: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.3, an unauthenticate

security
Apr 7, 2026

CVE-2026-35487 is a path traversal vulnerability (a flaw that lets attackers read files outside the intended directory) in text-generation-webui, an open-source tool for running large language models through a web interface. Before version 4.3, attackers could exploit the load_prompt() function without logging in to read any .txt file on the server and see its contents in the API response.

Fix: Update text-generation-webui to version 4.3 or later, where this vulnerability is fixed.
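The bug class is worth illustrating. Below is a minimal, hypothetical sketch of the pattern, not the project's actual code: a user-supplied name joined directly onto a base directory lets `../` segments escape it, while resolving the final path and checking containment blocks the traversal. `PROMPT_DIR` and `resolve_prompt_path` are illustrative names.

```python
from pathlib import Path

# Illustrative base directory; the real application's layout differs.
PROMPT_DIR = Path("/srv/app/prompts")

def resolve_prompt_path(name: str) -> Path:
    """Return the prompt file path, rejecting traversal outside PROMPT_DIR."""
    # resolve() collapses any "../" segments in the user-supplied name,
    # so a containment check on the resolved path blocks traversal.
    target = (PROMPT_DIR / f"{name}.txt").resolve()
    if not target.is_relative_to(PROMPT_DIR):
        raise ValueError(f"path traversal blocked: {name!r}")
    return target
```

A request for `"../../etc/passwd"` resolves to a path outside the prompts directory and is rejected, whereas the vulnerable pattern would happily read the file and return its contents.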

NVD/CVE Database
04

CVE-2026-35486: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.3, he superbooga and

security
Apr 7, 2026

text-generation-webui, an open-source web interface for running Large Language Models, has a vulnerability in versions before 4.3 where the superbooga and superboogav2 RAG extensions (tools that fetch external documents to help answer questions) accept user-provided URLs without checking them for safety. This allows attackers to access cloud metadata endpoints (services that store sensitive credentials in cloud environments) and steal IAM credentials (identity and access management tokens that control what users can do). The vulnerability is fixed in version 4.3.

Fix: Update text-generation-webui to version 4.3 or later.
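The underlying flaw is server-side request forgery: fetching a user-provided URL without validating where it points. A minimal defensive sketch (hypothetical, not the project's actual fix) checks the scheme, rejects known cloud metadata hosts, and refuses private, link-local, and loopback addresses after resolution:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Well-known cloud metadata endpoints (AWS/Azure use the link-local IP,
# GCP also answers on a hostname).
METADATA_HOSTS = {"169.254.169.254", "metadata.google.internal"}

def is_url_allowed(url: str) -> bool:
    """Return True only if the URL is safe to fetch from a RAG pipeline."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    host = parsed.hostname
    if host in METADATA_HOSTS:
        return False
    try:
        # Resolve the hostname and reject internal address ranges,
        # which cover metadata services and other internal targets.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)
```

Note this sketch resolves the hostname once; a production check would also guard against DNS rebinding by pinning the resolved address for the actual fetch.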

NVD/CVE Database
05

CVE-2026-35485: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.3, an unauthenticate

security
Apr 7, 2026

text-generation-webui, an open-source web interface for running Large Language Models, has a path traversal vulnerability (a security flaw where an attacker can access files outside the intended directory) in versions before 4.3. An unauthenticated attacker can exploit this by sending specially crafted requests through the API to read any file on the server, because Gradio (the framework it uses) does not validate user input on the server side.

Fix: Update text-generation-webui to version 4.3 or later, where this vulnerability is fixed.

NVD/CVE Database
06

CVE-2026-35484: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.3, an unauthenticate

security
Apr 7, 2026

CVE-2026-35484 is a path traversal vulnerability (a bug where an attacker can access files outside the intended folder) in text-generation-webui, an open-source tool for running large language models through a web interface. Before version 4.3, attackers could read any .yaml file (a configuration file format) on the server without needing to log in, potentially exposing sensitive data like passwords and API keys in the response.

Fix: Update text-generation-webui to version 4.3 or later, where this vulnerability is fixed.

NVD/CVE Database
07

CVE-2026-35483: text-generation-webui is an open-source web interface for running Large Language Models. Prior to 4.3, an unauthenticate

security
Apr 7, 2026

CVE-2026-35483 is a path traversal vulnerability (a flaw that lets attackers read files outside intended directories) in text-generation-webui, an open-source tool for running large language models. Versions before 4.3 allow unauthenticated attackers to read files with extensions like .jinja, .jinja2, .yaml, or .yml from anywhere on the server.

Fix: Update text-generation-webui to version 4.3 or later, where this vulnerability is fixed.

NVD/CVE Database
08

Human vs AI: Debates Shape RSAC 2026 Cybersecurity Trends

industry
Apr 7, 2026

At RSAC 2026, cybersecurity leaders debated how AI should be used in security work, including agentic applications (AI systems that can act independently to solve problems) and whether human involvement can realistically keep pace as AI scales. The discussions highlighted the tension between automating security tasks with AI and maintaining human oversight of important decisions.

Dark Reading
09

Enabling agent-first process redesign

industry
Apr 7, 2026

AI agents (autonomous systems that learn and adapt to execute workflows without constant human direction) work best when organizations redesign their processes around them rather than adding them to existing systems. Companies need to shift to an 'agent-first' model where AI agents handle routine operations while humans set goals and handle exceptions, requiring machine-readable process definitions and structured data flows to succeed.

MIT Technology Review
10

CVE-2026-33866: MLflow is vulnerable to an authorization bypass affecting the AJAX endpoint used to download saved model artifacts. Due

security
Apr 7, 2026

MLflow has a security flaw called an authorization bypass (a weakness where access controls are not properly checked) in its AJAX endpoint (a web interface used to download model files) that allows users without permission to download saved model artifacts they shouldn't be able to access. This affects MLflow versions up to 3.10.1 and has a CVSS score (a 0-10 rating of severity) of 5.3, considered medium severity.

NVD/CVE Database