aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4452 items

From Stuxnet to ChatGPT: 20 News Events That Shaped Cyber

info · news
security · industry
May 6, 2026

A retrospective by Dark Reading marking its 20th anniversary, highlighting 20 major news events from the past two decades that significantly shaped the cybersecurity industry and the threat landscape security teams face today. The piece spans from Stuxnet (sophisticated malware that attacked industrial control systems) to ChatGPT (a large language model), showing how the security field has evolved over time.

Dark Reading

Microsoft’s Office and LinkedIn chief now runs Teams in latest reshuffle

info · news
industry
May 6, 2026

Microsoft is reorganizing its leadership structure following the retirement of executive Rajesh Jha. Ryan Roslansky, who previously led LinkedIn and then Office, is now taking on expanded responsibilities to head a new Work Experiences Group that includes Microsoft Teams, Office, and other products.

Chrome’s AI features may be hogging 4GB of your computer storage

info · news
safety
May 6, 2026

Google Chrome is automatically downloading a large 4GB file called weights.bin (a set of numerical values that power an AI model) to users' computers when certain AI features are enabled, which is unexpectedly consuming significant storage space. This file contains Google's Gemini Nano AI model, which runs Chrome's features like scam detection and writing assistance.

Poisoned truth: The quiet security threat inside enterprise AI

info · news
security · safety

v5.6.1

info · research
industry · security

OpenAI trial: Brockman rebuts Musk's take on startup's history, recounts secret work for Tesla

info · news
policy
May 5, 2026

OpenAI President Greg Brockman testified in a trial against Elon Musk, denying that he or others made commitments to keep OpenAI as a nonprofit organization. Brockman also revealed that Musk had enlisted OpenAI employees to do unpaid work at Tesla on self-driving technology in 2017, and testified that Musk was a polarizing figure who sometimes discouraged job candidates from joining OpenAI. The lawsuit, filed two years ago, centers on whether OpenAI violated an obligation to remain a nonprofit.

GHSA-hpv8-x276-m59f: vLLM Vulnerable to Remote DoS via Special-Token Placeholders

medium · vulnerability
security
May 5, 2026
CVE-2026-44222

vLLM (a system for running large language models) has a vulnerability where specially crafted text prompts containing multimodal placeholder tokens (sequences that represent images or videos) without actual image or video data cause the system to crash with an IndexError (a programming error when accessing data that doesn't exist). An unauthenticated attacker can send a single malicious request to a vLLM server to trigger a denial of service attack (making the service unavailable), affecting any deployment that runs vision-capable language models.
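A defensive pattern against this class of crash is to validate placeholder/media consistency at the API boundary, before the request reaches indexing code. A minimal Python sketch, assuming an illustrative `<image>` placeholder string and request shape rather than vLLM's actual internals:

```python
# Pre-flight check: a request whose prompt references more media placeholders
# than it actually attaches should be rejected with a client error, not allowed
# deeper into the engine where it raises an unhandled IndexError.
# The "<image>" placeholder string and argument shapes are illustrative.

def validate_multimodal_request(prompt: str, media_items: list,
                                placeholder: str = "<image>") -> None:
    n_placeholders = prompt.count(placeholder)
    n_media = len(media_items)
    if n_placeholders != n_media:
        raise ValueError(
            f"prompt references {n_placeholders} media placeholder(s) "
            f"but {n_media} media item(s) were attached"
        )

# A well-formed request passes silently; a placeholder with no image raises.
validate_multimodal_request("describe this image: <image>", [b"<jpeg bytes>"])
```

Failing fast with a 4xx-style error keeps a single malformed request from taking down the whole serving process.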

GHSA-8cxw-cc62-q28v: ciguard: discover_pipeline_files follows symlinks out of scan root

low · vulnerability
security
May 5, 2026
CVE-2026-44220

The `discover_pipeline_files()` function in ciguard (a tool used by AI agents to scan code repositories) followed symlinks (shortcuts that point to other directories) without proper restrictions, allowing an attacker to trick it into reading sensitive files outside the intended scan directory. An AI agent scanning a malicious folder with planted symlinks could accidentally expose secrets from system directories like ~/.aws/ or /etc/.
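The hardening this advisory calls for is a scan that refuses to traverse symlinks by default and confines every result to the scan root. A minimal sketch of that behavior, using illustrative names rather than ciguard's actual API:

```python
import os
from pathlib import Path

def discover_files(root: str, follow_symlinks: bool = False) -> list[Path]:
    # Sketch of the hardened scan described above (names are illustrative):
    # refuse to descend into symlinked directories by default, and filter
    # every result so its resolved path stays under the scan root.
    root_resolved = Path(root).resolve()
    found = []
    for dirpath, _dirnames, filenames in os.walk(root,
                                                 followlinks=follow_symlinks):
        for name in filenames:
            path = Path(dirpath) / name
            if path.is_symlink() and not follow_symlinks:
                continue  # skip symlinked files, not just directories
            # Drop anything resolving outside the root (e.g. ~/.aws/, /etc/),
            # even when the caller opted into symlink following.
            if path.resolve().is_relative_to(root_resolved):
                found.append(path)
    return found
```

`os.walk(..., followlinks=False)` never descends into symlinked directories, and the resolved-path filter catches the remaining escape routes such as symlinked files.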

GHSA-2hch-c97c-g99x: AVideo has SSRF Protection Bypass via HTTP Redirect and DNS Rebinding in isSSRFSafeURL()

high · vulnerability
security
May 5, 2026
CVE-2026-43884

AVideo has two security flaws in how it protects against SSRF attacks (server-side request forgery, where an attacker tricks a server into fetching URLs they control). First, two endpoints validate URLs using `isSSRFSafeURL()` but then use `file_get_contents()` without disabling PHP's automatic redirect-following, allowing an attacker to bypass protection by redirecting to internal addresses like cloud metadata endpoints. Second, six other callers of `isSSRFSafeURL()` ignore the DNS pinning feature (which locks a hostname to one IP address), leaving them vulnerable to DNS rebinding attacks (where an attacker makes a hostname resolve to different IP addresses in quick succession).
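The redirect flaw generalizes: any fetcher that validates a URL once and then lets the HTTP client auto-follow redirects can be steered to internal addresses. A minimal Python sketch of the safe pattern (AVideo itself is PHP; the validator here is a toy stand-in), re-validating every hop. Note it still resolves the hostname once for validation and again for the connection, so a complete defense would also pin the validated IP when connecting, which is the DNS rebinding gap described above:

```python
import http.client
import ipaddress
import socket
from urllib.parse import urlparse

def is_ssrf_safe(url: str) -> bool:
    # Toy validator for this sketch: resolve the hostname and reject private,
    # loopback, and link-local ranges (where cloud metadata endpoints live).
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(socket.gethostbyname(host))
    except (OSError, ValueError):
        return False
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)

def fetch_with_revalidation(url: str, max_hops: int = 5) -> bytes:
    # Never let the HTTP client auto-follow redirects: check each hop's
    # absolute target against the validator before requesting it.
    for _ in range(max_hops):
        if not is_ssrf_safe(url):
            raise ValueError(f"blocked unsafe URL: {url!r}")
        parts = urlparse(url)
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.hostname, parts.port)
        try:
            conn.request("GET", parts.path or "/")
            resp = conn.getresponse()
            if resp.status in (301, 302, 303, 307, 308):
                url = resp.getheader("Location")  # re-validated next iteration
                continue
            return resp.read()
        finally:
            conn.close()
    raise ValueError("too many redirects")
```

The sketch rejects relative `Location` headers as a side effect (their hostname is empty); a production fetcher would resolve them against the current URL before re-validating.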

GHSA-w2jh-77fq-7gp8: OpAMP client reads unbounded HTTP response bodies

medium · vulnerability
security
May 5, 2026
CVE-2026-42348

The OpAMP client (a component for managing telemetry agents) reads HTTP responses without limiting how much data it accepts, which could allow an attacker controlling the server to send extremely large responses and exhaust the application's memory, causing it to crash. This vulnerability only affects applications where the OpAMP server is untrusted or could be intercepted by a network attacker.
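The generic mitigation for this bug class is to cap how much of a response body is ever buffered. A minimal Python sketch of the idea (the cap value and function name are illustrative, not the OpAMP client's actual code):

```python
import io

def read_bounded(body, limit: int = 128 * 1024) -> bytes:
    # Read at most `limit` bytes plus one sentinel byte from a file-like HTTP
    # response body; anything larger is rejected up front instead of being
    # buffered until the process runs out of memory. The 128 KB default is an
    # illustrative ceiling for small control-protocol messages.
    data = body.read(limit + 1)
    if len(data) > limit:
        raise ValueError(f"response body exceeds {limit} byte limit")
    return data

# A small body passes through untouched; an oversized one is rejected.
assert read_bounded(io.BytesIO(b"status: ok")) == b"status: ok"
```

Reading `limit + 1` bytes is the standard trick for distinguishing "exactly at the limit" from "over the limit" without trusting a Content-Length header.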

Google Home’s Gemini AI can handle more complicated requests

info · news
industry
May 5, 2026

Google has updated Gemini for Home to version 3.1, which improves the AI assistant's ability to handle complex, multi-step tasks and combine multiple requests in a single command. The update also enhances Gemini's understanding of natural language (how humans normally speak), device identification, and management of calendar events. These improvements follow reports of bugs in the smart home assistant.

Supply-chain attacks take aim at your AI coding agents

high · news
security
May 5, 2026

Attackers are using supply-chain attacks (compromising software components that developers rely on) to target AI coding agents, which automatically scan package registries like NPM and PyPI for dependencies to include in projects. A North Korean group called Famous Chollima launched the PromptMink campaign, using fake packages with legitimate-sounding names and descriptions, along with hidden malicious code, to trick AI agents into installing malware that steals information and grants attackers remote access to developers' computers.

CVE-2026-33324: SQLBot is an intelligent Text-to-SQL system based on large language models and RAG. In versions 1.7.0 and earlier, the T…

critical · vulnerability
security
May 5, 2026
CVE-2026-33324

SQLBot is a Text-to-SQL system (software that converts natural-language questions into SQL database queries) built on large language models and RAG (retrieval-augmented generation, where the AI pulls in external data to help answer questions). Versions 1.7.0 and earlier have a prompt injection vulnerability (where an attacker hides malicious instructions in their input to trick the AI): user questions are inserted directly into the AI prompt without filtering, and the resulting SQL commands are executed without safety checks. An attacker with access can craft a malicious question that makes the system run harmful SQL, potentially achieving remote code execution (running commands on a system they don't own) when PostgreSQL is the backend.
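Prompt injection cannot be reliably filtered out of the prompt itself, so a common compensating control is to treat the model's SQL output as untrusted and gate what may execute. A naive keyword-based sketch (illustrative only; real deployments should also use a read-only database role and a proper SQL parser):

```python
def ensure_single_select(sql: str) -> str:
    # Coarse allow-list gate for model-generated SQL: exactly one statement,
    # starting with SELECT (or WITH), and no write/DDL keywords anywhere.
    # Keyword matching is deliberately blunt and can reject legitimate queries
    # (e.g. a column literally named "deleted"); it is a sketch, not a parser.
    stripped = sql.strip().rstrip(";").strip()
    if not stripped or ";" in stripped:
        raise ValueError("exactly one SQL statement is required")
    if stripped.split(None, 1)[0].upper() not in ("SELECT", "WITH"):
        raise ValueError("only read-only SELECT queries are allowed")
    forbidden = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE",
                 "TRUNCATE", "GRANT", "COPY", "DO", "EXECUTE"}
    tokens = {tok.strip("(),").upper() for tok in stripped.split()}
    if tokens & forbidden:
        raise ValueError("write/DDL keyword found in generated SQL")
    return stripped
```

The stronger control is connecting the Text-to-SQL executor with a database account that simply lacks write and DDL privileges, so even SQL that slips past the gate cannot do damage.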

Microsoft gives up on Xbox Copilot AI

info · news
industry
May 5, 2026

Microsoft is stopping development of Copilot (an AI assistant that helps users with tasks) on Xbox consoles and winding down its mobile version. The decision was announced by new Xbox CEO Asha Sharma as part of a reorganization aimed at helping Xbox move faster and better connect with players and developers.

Apple could let you pick a favorite AI model in iOS 27

info · news
industry
May 5, 2026

Apple is planning to let users choose their preferred AI model for Apple Intelligence features in upcoming operating systems (iOS 27, iPadOS 27, and macOS 27) expected this fall. Third-party AI models, called "Extensions," will be able to power features like Siri, Writing Tools, and Image Playground across the system. Users will also be able to assign different Siri voices to different AI models.

'I thought he was going to hit me' OpenAI co-founder says of Musk

info · news
industry
May 5, 2026

This article covers testimony in a lawsuit where Elon Musk is trying to reverse OpenAI's shift from a non-profit to a for-profit structure. OpenAI president Greg Brockman described a tense 2017 meeting where Musk became angry after being denied more control of the company, with Brockman stating he feared Musk might become physically violent. The lawsuit centers on whether Musk was aware of and agreed to OpenAI's plan to transition to a for-profit model before he left the company.

CISA mulls new three-day remediation deadline for critical flaws

info · news
policy · security

Introducing AI traffic analysis dashboards for AWS WAF

info · news
security · industry

OpenAI president’s ‘deeply personal’ diary becomes focus in Musk’s case against Altman

info · news
policy
May 5, 2026

Elon Musk is suing OpenAI president Greg Brockman and CEO Sam Altman, claiming they violated OpenAI's founding agreement by converting it from a non-profit to a for-profit company while deceiving him about their intentions. During the trial's second week, Brockman's personal emails, texts, and diary entries became key evidence as Musk seeks to remove the executives, undo the restructuring, and have $134 billion returned to OpenAI's non-profit arm.

Anthropic CEO warns of cyber ‘moment of danger’ as AI exposes thousands of vulnerabilities

info · news
security · policy
Sources and additional details

Microsoft’s Office and LinkedIn chief now runs Teams in latest reshuffle
Source: The Verge (AI)

Chrome’s AI features may be hogging 4GB of your computer storage
Source: The Verge (AI)

Poisoned truth: The quiet security threat inside enterprise AI
Source: CSO Online, May 6, 2026

AI data poisoning is a security threat in which an AI model's training data or information sources become corrupted, causing the system to make decisions based on false information while appearing to operate normally. Poisoning can result from malicious attacks, but organizations more often poison their own systems by feeding AI models data from conflicting sources such as outdated files and incompatible databases. Unlike traditional cyberattacks that trigger visible alarms, poisoning is dangerous because no obvious damage appears, yet the AI produces plausible but incorrect answers that affect business decisions.

v5.6.1
Source: MITRE ATLAS Releases, May 5, 2026

N/A -- This content is a navigation menu and product listing from GitHub's website (v5.6.1), not a security issue, vulnerability report, or technical problem. It describes GitHub features such as Copilot (an AI coding assistant), Actions (workflow automation), and security tools, but contains no substantive technical content to analyze.

OpenAI trial: Brockman rebuts Musk's take on startup's history, recounts secret work for Tesla
Source: CNBC Technology

GHSA-hpv8-x276-m59f: vLLM Vulnerable to Remote DoS via Special-Token Placeholders
Source: GitHub Advisory Database

GHSA-8cxw-cc62-q28v: ciguard: discover_pipeline_files follows symlinks out of scan root
Source: GitHub Advisory Database

Fix: Fixed in v0.8.2 and v0.8.3. The patch adds a new `follow_symlinks: bool = False` parameter to `discover_pipeline_files()` that refuses to descend into symlinked directories or files by default. Additionally, all results are filtered to verify that their resolved paths lie under the requested root directory, even if callers enable symlink following.

GHSA-2hch-c97c-g99x: AVideo has SSRF Protection Bypass via HTTP Redirect and DNS Rebinding in isSSRFSafeURL()
Source: GitHub Advisory Database

Fix: The source describes a safe implementation in `objects/functions.php`, `url_get_contents()`: disable auto-redirect with `['http' => ['follow_location' => 0]]`, manually loop through redirects (max 5 hops), and re-validate each redirect target by calling `isSSRFSafeURL()` on it before following. For DNS rebinding, the source indicates callers should capture and use the `$resolvedIP` out-parameter from `isSSRFSafeURL()` with `CURLOPT_RESOLVE` when fetching, as demonstrated by the one correctly implemented caller, `plugin/LiveLinks/proxy.php`.

GHSA-w2jh-77fq-7gp8: OpAMP client reads unbounded HTTP response bodies
Source: GitHub Advisory Database

Fix: Update to the patched version: pull request #4116 updates the OpAMP client HTTP transport to limit responses to a maximum of 128 KB, preventing unbounded memory consumption.

Google Home’s Gemini AI can handle more complicated requests
Source: The Verge (AI)

Supply-chain attacks take aim at your AI coding agents
Source: CSO Online

CVE-2026-33324 (SQLBot)
Source: NVD/CVE Database

Fix: This issue has been fixed in version 1.7.1.

Microsoft gives up on Xbox Copilot AI
Source: The Verge (AI)

Apple could let you pick a favorite AI model in iOS 27
Source: The Verge (AI)

'I thought he was going to hit me' OpenAI co-founder says of Musk
Source: BBC Technology

CISA mulls new three-day remediation deadline for critical flaws
Source: CSO Online, May 5, 2026

CISA (the US Cybersecurity and Infrastructure Security Agency) is considering cutting the time government agencies have to fix critical vulnerabilities from 14 days to 3 days, partly over concerns that AI models like Claude will help attackers find and exploit serious flaws more quickly. Currently, the most urgent vulnerabilities (zero-days, flaws being actively exploited with no patch available) require fixes within 24-72 hours, while other critical vulnerabilities under active exploitation have 14 days. Security experts have mixed views on whether a 3-day timeline is realistic, with many concerned it leaves too little time to test patches before deployment.

Introducing AI traffic analysis dashboards for AWS WAF
Source: AWS Security Blog, May 5, 2026

AWS has launched AI Traffic Analysis dashboards for AWS WAF (web application firewall, a service that filters traffic to web applications), helping organizations understand and manage AI bot traffic, which now makes up 30-60% of total web activity. The dashboards show which AI bots are accessing applications, their intent (such as data gathering or search indexing), and their traffic patterns, integrated with AWS WAF Bot Control's detection of over 650 unique bots.

OpenAI president’s ‘deeply personal’ diary becomes focus in Musk’s case against Altman
Source: The Guardian Technology

Anthropic CEO warns of cyber ‘moment of danger’ as AI exposes thousands of vulnerabilities
Source: CNBC Technology, May 5, 2026

Anthropic's CEO warned that the company's latest AI model, Mythos, has discovered tens of thousands of software vulnerabilities (security weaknesses that attackers could exploit), creating an urgent window for organizations to patch them before rival AI systems catch up in roughly 6-12 months. The company is restricting access to Mythos because releasing information about unpatched vulnerabilities could let criminals or hostile nations exploit them, but its leaders expressed conditional optimism that handling this "moment of danger" correctly could improve cybersecurity overall.