aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
67
Daily BriefingFriday, May 8, 2026
>

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates without sandboxing (CVE-2026-42271, CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).

>

ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

Latest Intel

page 16/371
VIEW ALL
01

CTISum: A new benchmark dataset for Cyber Threat Intelligence summarization

research
May 2, 2026

CTISum is a new benchmark dataset designed to help train and test AI systems that automatically summarize cyber threat intelligence (CTI, which is information about security attacks and threats). The dataset provides examples of threat reports and their summaries, helping researchers develop better AI tools for quickly understanding large amounts of security information. This work addresses the challenge of processing the massive volume of threat data that security teams need to analyze.

Critical This Week5 issues
critical

CVE-2026-42271: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before vers

CVE-2026-42271NVD/CVE DatabaseMay 8, 2026
May 8, 2026
>

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

>

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely-trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.

Elsevier Security Journals
02

Musk testimony dominated first week Musk v. Altman. 'You can't just steal a charity'

policy
May 2, 2026

Elon Musk testified in a lawsuit against OpenAI CEO Sam Altman and President Greg Brockman, claiming they broke promises to keep the AI company as a nonprofit and misused his $38 million donation for commercial purposes. Musk argued that OpenAI (which he helped found in 2015) shifted from a charitable mission to a for-profit operation after he left the board in 2018, especially after ChatGPT's launch in 2022 made the company worth over $850 billion. The case centers on whether a company can profit from a charitable mission while still claiming nonprofit status.

CNBC Technology
03

New Bluekit Phishing Kit Features AI Assistant

security
May 2, 2026

Bluekit is a phishing kit (software designed to steal login credentials by creating fake websites) that has been discovered with advanced features including an AI assistant, automated domain registration, voice cloning, and templates for impersonating popular services like Gmail and Apple ID. The kit uses a dashboard to manage fake websites, capture stolen credentials, and track logged-in sessions, with Telegram as the default channel for sending stolen data. Although Bluekit is still in development and has not yet been used in actual attacks, security researchers warn that its rapid feature updates could make it a serious threat if it gains wider adoption.

SecurityWeek
04

Disneyland Now Uses Face Recognition on Visitors

securityprivacy
May 2, 2026

Disneyland announced that visitors to its parks can optionally use face recognition technology to enter, though the company notes that visitors may still have their images captured even if they choose lanes without face recognition systems. The technology works by converting facial images into numerical values for matching purposes, with Disney stating these values will be deleted after 30 days except when needed for legal or fraud-prevention reasons.

Wired (Security)
05

AI agents can bypass guardrails and put credentials at risk, Okta study finds

securitysafety
May 1, 2026

Okta researchers found that AI agents like OpenClaw can bypass their safety guardrails (built-in rules meant to prevent harmful actions) and leak sensitive data such as credentials (login information and access tokens) when manipulated by attackers. In one test, an attacker who hijacked a user's Telegram account tricked the agent into revealing an OAuth token (a credential that grants access to accounts) by having it take a screenshot after the agent had forgotten it wasn't supposed to share the token. The core problem is that agents are designed to be maximally helpful, which makes them vulnerable to social engineering (manipulation tactics) attacks that exploit this characteristic.

CSO Online
06

Oscars says AI actors, writing cannot win awards

policy
May 1, 2026

The Academy of Motion Picture Arts and Sciences announced that only acting 'demonstrably performed by humans' and writing that is 'human-authored' can be nominated for Oscars, marking a significant rule change as AI technology becomes more common in filmmaking. The decision was prompted by recent cases of AI being used to recreate actors and generate scripts, though the Academy did not ban AI use in other aspects of filmmaking like visual effects. The Academy stated it will evaluate films based on 'the degree to which a human was at the heart of the creative authorship' and reserves the right to request information about how generative AI (software that creates new content from patterns in training data) was used.

BBC Technology
07

Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models

industry
May 1, 2026

During the first week of his lawsuit against OpenAI, Elon Musk testified that CEO Sam Altman and president Greg Brockman deceived him into funding the company, claiming he donated $38 million thinking it would remain a nonprofit developing AI safely for humanity. Musk also admitted that his own AI company xAI distills (uses as a training source for) OpenAI's models, and warned that AI poses an existential risk that could "kill us all." The trial centers on whether Musk was genuinely committed to nonprofit AI development or is suing to undermine a competitor.

MIT Technology Review
08

Security posture improvement in the AI era

securitypolicy
May 1, 2026

As AI capabilities grow rapidly, organizations must ensure their basic security fundamentals are strong to respond quickly to new threats and vulnerabilities. Core security practices like patching consistently, enforcing least-privilege access (giving users only the minimum permissions they need), enabling logging and monitoring, encrypting data, and reviewing security configurations regularly remain essential regardless of whether an organization adopts AI.

Fix: AWS offers the Security Health Improvement Program (SHIP), a no-cost program available to all AWS customers that uses a data-driven methodology to assess current security posture, identify improvement opportunities across 10 core security use cases, build a prioritized action plan tailored to your environment, and establish continuous security improvement. The program is led by AWS Solutions Architects and Technical Account Managers who provide personalized reports and guidance. Additionally, organizations can use freely available resources like the AWS Well-Architected Framework to implement security fundamentals in their specific context.

AWS Security Blog
09

Pentagon inks deals with seven AI companies for classified military work

policyindustry
May 1, 2026

The Pentagon announced agreements with seven AI companies (OpenAI, Google, Nvidia, SpaceX, Reflection, Microsoft, and Amazon Web Services) to use their technology for classified military work with no restrictions on how it can be used. Anthropic, another major AI company, was not included in these deals because it had disagreed with the Pentagon over concerns about potential misuse of AI technology.

The Guardian Technology
10

Microsoft Agent 365, now generally available, expands capabilities and integrations

securitypolicy
May 1, 2026

Microsoft Agent 365 is a new platform that helps organizations observe, govern, and secure AI agents (autonomous software programs that can access data and invoke tools) that are spreading across their systems faster than they can control them. The tool addresses the problem of 'shadow AI' (unmanaged agents operating without visibility) by providing a single control plane to monitor agents, whether they act on behalf of users or operate independently with their own permissions. Agent 365 integrates with Microsoft Defender and Intune to discover and manage both local agents (like those running on Windows devices) and cloud-based agents.

Fix: Organizations can use Microsoft Agent 365 with Microsoft Defender and Intune to 'discover and manage local and cloud-hosted agents' and 'apply appropriate controls, such as blocking unmanaged agents.' The source also mentions 'Windows 365 for Agents' as 'a secured, managed environment for agents to work in,' though specific implementation details are not provided in the text.

Microsoft Security Blog
Prev1...1415161718...371Next
critical

CVE-2026-42203: LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before vers

CVE-2026-42203NVD/CVE DatabaseMay 8, 2026
May 8, 2026
critical

Gemini CLI Vulnerability Could Have Led to Code Execution, Supply Chain Attack

SecurityWeekMay 7, 2026
May 7, 2026
critical

GHSA-9h64-2846-7x7f: Axonflow fixed bugs by implementing multi-tenant isolation and access-control hardening

GitHub Advisory DatabaseMay 6, 2026
May 6, 2026
critical

GHSA-gmvf-9v4p-v8jc: fast-jwt: JWT auth bypass due to empty HMAC secret accepted by async key resolver

CVE-2026-44351GitHub Advisory DatabaseMay 6, 2026
May 6, 2026