aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Friday, May 8, 2026

Critical RCE Vulnerabilities in LiteLLM Proxy Server: LiteLLM, a proxy server that forwards requests to AI model APIs, disclosed three critical and high-severity flaws in versions 1.74.2 through 1.83.6. Two test endpoints allowed attackers with valid API keys to execute arbitrary code (running any commands an attacker wants) on the server by submitting malicious configurations or prompt templates that the server rendered without sandboxing (CVE-2026-42271 and CVE-2026-42203, both critical), while a SQL injection flaw (inserting malicious code into database queries) let unauthenticated attackers read or modify stored API credentials (CVE-2026-42208, high).
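
Both flaw classes are common enough to sketch generically. The snippet below is not LiteLLM's code; it is a minimal Python illustration, assuming Jinja2-style templates and a SQLite store, of why rendering attacker-supplied templates outside a sandbox leads toward code execution and why parameter binding blocks the injection path.

```python
# Generic illustration of the flaw classes above -- NOT LiteLLM's actual code.
# Assumes Jinja2 for templating and the standard-library sqlite3 module.
import sqlite3
from jinja2 import Template
from jinja2.sandbox import SandboxedEnvironment

# A classic server-side template injection probe: reaching Python internals
# through attribute access is the first step toward running OS commands.
payload = "{{ ''.__class__.__mro__ }}"

# Vulnerable pattern: an attacker-controlled template rendered with full access.
print(Template(payload).render())          # exposes Python object internals

# Safer pattern: the sandbox rejects access to underscore attributes, so the
# same payload raises SecurityError instead of leaking object internals.
try:
    SandboxedEnvironment().from_string(payload).render()
except Exception as exc:
    print("sandbox blocked:", exc)

# SQL injection class: bind parameters instead of concatenating user input.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (team TEXT, api_key TEXT)")
team = "x' OR '1'='1"                      # attacker-controlled value
# Vulnerable: conn.execute(f"SELECT api_key FROM keys WHERE team = '{team}'")
rows = conn.execute("SELECT api_key FROM keys WHERE team = ?", (team,)).fetchall()
print(rows)                                # input treated as data, not SQL
```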


ClaudeBleed Exploit Allows Extension Hijacking in Chrome: Anthropic's Claude browser extension contains a vulnerability that allows malicious Chrome extensions to hijack it and perform unauthorized actions like exfiltrating files, sending emails, or stealing code from private repositories. The flaw stems from the extension trusting any script from claude.ai without verifying the actual caller, and while Anthropic released a partial fix in version 1.0.70 on May 6, researchers report it remains exploitable when the extension runs in privileged mode.

AI Systems Show Triple the High-Risk Vulnerabilities of Legacy Software: Penetration testing data reveals that AI and LLM systems have 32% of findings rated high-risk compared to just 13% for traditional software, with only 38% of high-risk AI issues getting resolved. Security experts attribute this gap to rapid deployment without mature controls, novel attack surfaces like prompt injection (tricking AI by hiding instructions in input), and fragmented responsibility for remediation across teams.

Model Context Protocol Emerging as Critical Security Blind Spot: Model Context Protocol (MCP, a plugin system connecting AI agents to external tools) has become a major vulnerability vector as organizations fail to scan for or monitor MCP-related risks. Recent supply chain attacks, such as the postmark-mcp npm package that exfiltrated emails from 300 organizations, demonstrate how attackers exploit widely trusted MCP packages and hardcoded credentials in AI configurations to enable credential theft and supply chain compromises at scale.
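
As a concrete illustration of the hardcoded-credential risk described above, the sketch below (mine, not the item's tooling) scans an MCP client configuration for secrets embedded in server definitions. It assumes the common mcpServers JSON layout; the file path and token patterns are assumptions for illustration.

```python
# Hypothetical sketch: flag hardcoded credentials in an MCP client config.
# Assumes the common {"mcpServers": {name: {"command", "args", "env"}}} layout;
# the path and regexes are illustrative, not an official scanner.
import json
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),          # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),          # GitHub personal access tokens
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[=:]\s*\S{12,}"),
]

def scan_mcp_config(path: Path) -> list[str]:
    """Return human-readable findings for values that look like baked-in secrets."""
    findings = []
    config = json.loads(path.read_text())
    for name, server in config.get("mcpServers", {}).items():
        values = list(server.get("args", [])) + list(server.get("env", {}).values())
        for value in values:
            if any(p.search(str(value)) for p in SECRET_PATTERNS):
                findings.append(f"{name}: possible hardcoded credential in {value!r}")
    return findings

if __name__ == "__main__":
    for finding in scan_mcp_config(Path("claude_desktop_config.json")):
        print(finding)
```
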
Latest Intel

01

From Access Control to Outcome Control: Securing AI Agents with Check Point and Google Cloud

security, policy

Apr 22, 2026

AI agents (AI systems that can retrieve data, use tools, and perform actions automatically) introduce new security challenges because traditional access control (rules about who can use a system) isn't enough. Google Cloud's Gemini Enterprise Agent Platform offers a centralized control point that provides identity management, access control, policy enforcement, and observability (the ability to see and monitor what's happening) to secure how these agents operate.
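
The difference between access control and outcome control is easier to see in code. The sketch below is a generic illustration, not Google Cloud's or Check Point's implementation: a wrapper that checks the agent's identity and a per-action policy before a tool call, then logs what actually happened so the outcome can be audited. All names and policies are hypothetical.

```python
# Hypothetical sketch of centralized policy enforcement around agent tool calls:
# identity check (who is calling), policy check (is this action allowed), and an
# audit log of the outcome. Names and policies are illustrative only.
import logging
from dataclasses import dataclass
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: set[str]

def enforce(identity: AgentIdentity, action: str, tool: Callable[..., Any], **kwargs) -> Any:
    """Run a tool call only if policy allows it, and record the outcome either way."""
    if action not in identity.allowed_actions:             # access control
        log.warning("denied %s for agent %s", action, identity.agent_id)
        raise PermissionError(f"{identity.agent_id} may not perform {action}")
    result = tool(**kwargs)
    log.info("agent %s performed %s -> %r", identity.agent_id, action, result)  # outcome visibility
    return result

# Example: an invoicing agent allowed to read invoices but nothing else.
agent = AgentIdentity(agent_id="invoice-bot", allowed_actions={"read_invoice"})
enforce(agent, "read_invoice", lambda invoice_id: {"id": invoice_id, "total": 120}, invoice_id="INV-7")
```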

Check Point Research
02

Retail traders can now get long OpenAI as Robinhood's venture fund takes a stake

industry
Apr 22, 2026

Robinhood Ventures Fund I, an investment vehicle that lets regular traders buy into private companies, invested $75 million in OpenAI, the AI company behind ChatGPT. This gives retail investors (non-professional traders) access to ownership stakes in one of the most influential artificial intelligence companies, reflecting growing investor demand for exposure to leading AI firms.

CNBC Technology
03

AI-Enhanced Cybersecurity in Edge Computing: Threats, Solutions, and Future Directions

security, research
Apr 22, 2026

This academic survey article examines how AI is being used to improve security in edge computing (processing data on devices near users rather than in distant data centers), while also exploring the new threats that arise when combining AI with edge systems. The article covers both the security challenges unique to AI-enhanced edge environments and potential approaches to address them, looking toward future developments in this field.

ACM Digital Library (TOPS, DTRAP, CSUR)
04

NFC tap-to-pay gets tapped by hackers

security
Apr 22, 2026

Hackers have infected a legitimate Android payment app called HandyPay with malware (trojanized code, meaning legitimate software modified with malicious additions) to steal NFC data (near field communication, the technology that powers tap-to-pay) and PINs, allowing them to clone payment cards and drain accounts. The attackers likely used generative AI to help create the malware, as evidenced by emoji markers in the code that are typical of AI-generated text. The malware is being distributed through fake websites impersonating a Brazilian lottery and a spoofed Google Play store, targeting Android users in Brazil.

Fix: Android provides some protection through security alerts. When a user tries to download the trojanized app from a browser, Android automatically blocks the install and shows a prompt requiring manual permission to allow installation from that source. ESET researchers also published indicators of compromise (file hashes, network indicators, and MITRE ATT&CK mappings) in a dedicated GitHub repository to support detection efforts.
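
For teams consuming indicator lists like the one mentioned above, one minimal check is hashing suspect APKs against the published file hashes. The sketch below is a generic example; the indicator file name and one-hash-per-line format are assumptions, not ESET's actual repository layout.

```python
# Generic IoC matching sketch: compare SHA-256 hashes of collected APKs against
# a published indicator list. The "iocs.txt" file name and one-hash-per-line
# format are assumptions for illustration.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def match_iocs(samples_dir: Path, ioc_file: Path) -> list[Path]:
    """Return sample files whose SHA-256 appears in the indicator list."""
    known_bad = {line.strip().lower() for line in ioc_file.read_text().splitlines() if line.strip()}
    return [apk for apk in samples_dir.glob("*.apk") if sha256_of(apk) in known_bad]

if __name__ == "__main__":
    for hit in match_iocs(Path("samples"), Path("iocs.txt")):
        print("IoC match:", hit)
```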

CSO Online
05

Claude Mythos Finds 271 Firefox Vulnerabilities

security, research
Apr 22, 2026

A tool called Claude Mythos discovered 271 security vulnerabilities (weak points that could be exploited) in Firefox, Mozilla's web browser. According to Mozilla, all of these flaws could have also been found by a highly skilled human security researcher, suggesting the AI tool didn't discover anything that experienced humans couldn't find.

SecurityWeek
06

Toxic Combinations: When Cross-App Permissions Stack into Risk

security, safety
Apr 22, 2026

On January 31, 2026, researchers found that Moltbook, a social network for AI agents, exposed 35,000 email addresses and 1.5 million agent API tokens because its database was unencrypted, including plaintext third-party credentials like OpenAI API keys. The core risk is a "toxic combination," where an AI agent or integration bridges two or more applications through OAuth grants (permission frameworks allowing apps to access each other) or API connections, and each application owner reviews only their own side, missing the security risks created by the bridge itself.

Fix: The source suggests shifting review processes from inside each app to between them, recommending four specific areas: (1) maintain a non-human identity inventory treating every AI agent, bot, MCP server (modular tools that extend AI capabilities), and OAuth integration the same as user accounts with owners and review dates, (2) flag new write scopes (permissions to modify data) on identities that already hold read scopes (permissions to view data) in different apps before approval, (3) create a review trail for every connector linking two systems that names both sides and the trust relationship between them, and (4) monitor long-lived tokens whose activity has drifted from their original scopes.
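
Point (2) above is straightforward to automate. The sketch below is my own minimal example, not the article's tooling: given an inventory of non-human identities and the scopes they already hold per application, it flags a requested write grant when the same identity already holds read scopes in a different app. All identifiers and scope names are hypothetical.

```python
# Hypothetical sketch for point (2): flag "toxic combinations" where a requested
# write scope in one app stacks onto read scopes the identity already holds in
# other apps. Inventory shape and scope names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    name: str
    owner: str
    # scopes held, keyed by application: {"crm": {"contacts:read"}, ...}
    scopes: dict[str, set[str]] = field(default_factory=dict)

def is_toxic_combination(identity: NonHumanIdentity, app: str, requested_scope: str) -> bool:
    """True if granting a write scope in `app` would bridge it to read access elsewhere."""
    if "write" not in requested_scope:
        return False
    return any(
        other_app != app and any("read" in s for s in scopes)
        for other_app, scopes in identity.scopes.items()
    )

# Example: an agent that can already read the CRM asks for write access to email.
agent = NonHumanIdentity(name="sales-agent", owner="revops@example.com",
                         scopes={"crm": {"contacts:read"}})
print(is_toxic_combination(agent, "email", "mail:write"))   # True -> hold for review
```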

The Hacker News
07

Anthropic investigating claim of unauthorised access to Mythos AI tool

security
Apr 22, 2026

Anthropic is investigating a claim that unauthorized users accessed Claude Mythos, an advanced AI security tool that the company considers too dangerous to release publicly. The access likely occurred when someone with legitimate access to Anthropic's systems via a third-party vendor misused their credentials, rather than through a traditional hack (a deliberate attempt to break into a computer system). The incident raises concerns about whether large AI companies can adequately control access to their most powerful models.

BBC Technology
08

AI needs a strong data fabric to deliver business value

industry
Apr 22, 2026

As AI systems move into everyday business use, companies are discovering that the biggest challenge is not making AI faster or more powerful, but ensuring AI has the business context (the meaning and relationships behind data) it needs to make good decisions. Without this context, AI can produce answers quickly but make wrong choices, like a supply-chain system that optimizes inventory numbers without understanding which customers are strategically important or what tradeoffs matter during shortages. Organizations are now building data fabrics (systems that connect information across applications while preserving how the business actually works) as a foundation to give AI the context it needs to make decisions aligned with real business priorities.

MIT Technology Review
09

Speeding up agentic workflows with WebSockets in the Responses API

industry
Apr 22, 2026

Codex (an AI coding assistant) agent loops involved many back-and-forth API requests that added significant delays, especially as model inference speeds improved to nearly 1,000 tokens per second (words generated per second). To reduce this overhead, the team implemented WebSockets (a protocol that maintains a persistent connection between client and server, rather than opening a new connection for each request), along with caching and eliminating unnecessary network calls, achieving a 40% overall speedup in end-to-end performance.

Fix: The team implemented WebSockets as a persistent connection protocol for the Responses API instead of using multiple synchronous HTTP requests. Additionally, they applied caching to store rendered tokens and model configuration in memory to skip expensive tokenization and network calls, reduced network hop latency by eliminating intermediate service calls and directly contacting the inference service, and improved the safety stack to run classifiers faster.
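
The structural change is a persistent connection replacing one request per agent turn. The sketch below is a generic comparison, not OpenAI's implementation: the endpoint URLs and message shapes are invented, and it assumes the httpx and websockets packages are available.

```python
# Generic sketch of the change described above -- NOT OpenAI's implementation.
# Endpoint URLs and message shapes are hypothetical; assumes the `httpx` and
# `websockets` packages are installed.
import json
import httpx
import websockets

HTTP_URL = "https://api.example.com/v1/responses"      # hypothetical endpoint
WS_URL = "wss://api.example.com/v1/responses/ws"       # hypothetical endpoint

def agent_loop_http(turns: list[dict]) -> list[dict]:
    """Each agent turn is its own request/response, paying per-request overhead
    (serialization, routing, and often connection setup) on every round trip."""
    results = []
    with httpx.Client() as client:
        for turn in turns:
            results.append(client.post(HTTP_URL, json=turn).json())
    return results

async def agent_loop_ws(turns: list[dict]) -> list[dict]:
    """One persistent WebSocket carries every turn, so connection setup is paid
    once for the whole loop; run with asyncio.run(agent_loop_ws(turns))."""
    results = []
    async with websockets.connect(WS_URL) as ws:
        for turn in turns:
            await ws.send(json.dumps(turn))
            results.append(json.loads(await ws.recv()))
    return results
```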

OpenAI Blog
10

Introducing workspace agents in ChatGPT

industry
Apr 22, 2026

OpenAI has introduced workspace agents in ChatGPT, which are AI tools that can handle complex work tasks and long-running workflows while respecting organizational permissions and controls. These agents, powered by Codex (a code-generating AI model), can automate tasks like report writing, code generation, and message responses, and can continue working in the cloud even when users are offline. Teams can create shared agents once and reuse them across ChatGPT and Slack, with examples including agents that review software requests, route product feedback, and manage vendor risk assessment.

OpenAI Blog