aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4452 items

AI music is flooding streaming services — but who wants it?

infonews
industry
May 3, 2026

Generative AI (software that creates new content based on patterns in its training data) is flooding streaming services with machine-made music, a trend that began with experimental projects in 2018-2019 using tools like Google's Magenta. The article asks whether audiences actually want AI-generated music despite its growing presence on these platforms.

The Verge (AI)

CVE-2026-7687: A vulnerability was determined in langflow-ai langflow up to 1.8.4. Affected by this issue is the function CodeParser.pa

mediumvulnerability
security
May 3, 2026
CVE-2026-7687

A command injection vulnerability (CWE-77, a flaw that lets attackers insert extra commands into program input) was found in Langflow AI's langflow up to version 1.8.4, specifically in the CodeParser.parse_callable_details function. An attacker with valid login credentials can exploit it remotely, and an exploit has been publicly disclosed. The vendor was notified early but did not respond.

NVD/CVE Database
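The CWE-77 pattern described above — user input reaching a command interpreter — can be illustrated with a minimal Python sketch. This is generic and illustrative only, not Langflow's actual code; the function names are invented:

```python
import subprocess

def run_tool_unsafe(user_arg: str) -> str:
    # VULNERABLE (CWE-77): user input is interpolated into a shell string,
    # so an argument like "x; rm -rf /" injects a second command.
    return subprocess.run(
        f"echo {user_arg}", shell=True, capture_output=True, text=True
    ).stdout

def run_tool_safe(user_arg: str) -> str:
    # Safer: pass an argument vector with the default shell=False, so the
    # input is treated as data and never parsed by a shell.
    return subprocess.run(
        ["echo", user_arg], capture_output=True, text=True
    ).stdout

print(run_tool_safe("hello; id"))  # the ';' is printed literally, not executed
```

The safe variant works because `subprocess.run` with a list invokes the program directly; no shell ever sees the metacharacters.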

AI chatbot fraud: the ‘gift card’ subscription that may cost you dear

mediumnews
securityprivacy
May 3, 2026

Fraudsters have been using compromised accounts to purchase gift cards for Claude, an AI chatbot by Anthropic, and charging them to users' credit cards without permission. Multiple Claude users reported unauthorized charges ranging from $200 to €225, with vouchers being sent to their email addresses, suggesting potential email compromise.

Fix: Anthropic says it is putting new protections in place to prevent fraudulent gift card purchases and that it cancels subscriptions and issues refunds when it identifies scam purchases. The company advises: contact Anthropic's support about unrecognized payments, cancel your affected bank card and request a new one, change your login details on the site, and contact your bank or credit card company to make a chargeback claim (a formal dispute requesting your money back) if you notice unauthorized payments.

The Guardian Technology

CVE-2026-7669: A vulnerability was detected in sgl-project SGLang up to 0.5.9. Impacted is the function get_tokenizer of the file pytho

mediumvulnerability
security
May 2, 2026
CVE-2026-7669

A vulnerability (CVE-2026-7669) was found in SGLang, an open-source project, affecting versions up to 0.5.9. The flaw is in the get_tokenizer function and allows unsafe deserialization (converting untrusted data into executable objects); it can be exploited remotely, though the attack is of high complexity. The vulnerability has a CVSS score (a 0-10 severity rating) of 6.3, classified as medium severity.

NVD/CVE Database
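The deserialization risk mentioned above can be shown with a minimal Python sketch — not SGLang's actual code, just the general hazard of feeding untrusted bytes to an object-reconstructing deserializer such as pickle:

```python
import json
import pickle

class Runner:
    # Unpickling an instance of this class triggers code execution,
    # because pickle honors whatever __reduce__ specifies.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# VULNERABLE pattern: attacker-controlled bytes handed to pickle.loads
# would run attacker-chosen code during deserialization.
malicious_blob = pickle.dumps(Runner())
# pickle.loads(malicious_blob)  # would execute `echo pwned`

def load_config_safe(raw: bytes) -> dict:
    # Safer: parse untrusted input with a data-only format such as JSON,
    # which cannot construct arbitrary objects or call functions.
    return json.loads(raw)

print(load_config_safe(b'{"tokenizer": "default"}'))
```

This is why the Python documentation itself warns to never unpickle data from an untrusted source.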

CVE-2026-7644: A vulnerability has been found in ChatGPTNextWeb NextChat up to 2.16.1. Affected is the function addMcpServer of the fil

highvulnerability
security
May 2, 2026
CVE-2026-7644

A vulnerability (CVE-2026-7644) was found in ChatGPTNextWeb NextChat version 2.16.1 and earlier, affecting the addMcpServer function in the app/mcp/actions.ts file. The flaw allows improper authorization (the system fails to verify who should have access to certain features), and it can be exploited remotely by anyone without special permissions. The vulnerability has been publicly disclosed, and the developers were notified but have not yet responded.

NVD/CVE Database
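The improper-authorization flaw class above boils down to a privileged action that never checks who is calling it. A minimal Python sketch (the `add_mcp_server_*` functions and `User` model are invented for illustration, loosely mirroring the affected `addMcpServer` action):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str  # "admin" or "viewer"

SERVERS: list[str] = []

def add_mcp_server_unsafe(user: User, url: str) -> bool:
    # VULNERABLE: no authorization check -- any caller, even an
    # unprivileged viewer, can register a new server.
    SERVERS.append(url)
    return True

def add_mcp_server_safe(user: User, url: str) -> bool:
    # Fixed pattern: verify the caller's role before performing
    # the privileged action.
    if user.role != "admin":
        return False
    SERVERS.append(url)
    return True

assert add_mcp_server_safe(User("eve", "viewer"), "http://evil.example") is False
assert add_mcp_server_safe(User("ops", "admin"), "http://internal.example") is True
```

The check must live server-side in the action itself; hiding the button in the UI is not authorization.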

CVE-2026-7643: A flaw has been found in ChatGPTNextWeb NextChat up to 2.16.1. This impacts an unknown function of the file Next.js of t

mediumvulnerability
security
May 2, 2026
CVE-2026-7643

ChatGPTNextWeb NextChat versions up to 2.16.1 contain a flaw in a Next.js API endpoint that lets attackers manipulate a function to create a permissive cross-domain policy with untrusted domains (meaning the system accepts requests from any website, not just trusted ones). The attack can be launched remotely; an exploit has been published, but the project developers have not yet responded to the early notification.

NVD/CVE Database
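A permissive cross-domain policy of the kind described above typically comes from reflecting the caller's Origin header back verbatim. A generic Python sketch (illustrative only; the allowlist domain is invented):

```python
ALLOWED_ORIGINS = {"https://app.example.com"}

def cors_headers_unsafe(request_origin: str) -> dict:
    # VULNERABLE: echoing the caller's Origin back verbatim, with
    # credentials allowed, treats every website as trusted.
    return {"Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true"}

def cors_headers_safe(request_origin: str) -> dict:
    # Fixed pattern: only echo origins from an explicit allowlist;
    # any other origin receives no CORS grant at all.
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

assert cors_headers_safe("https://evil.example") == {}
```

Combined with `Allow-Credentials: true`, the unsafe variant lets any site read authenticated responses on behalf of a logged-in victim, which is why reflection without an allowlist is dangerous.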

CTISum: A new benchmark dataset for Cyber Threat Intelligence summarization

inforesearchPeer-Reviewed
research
May 2, 2026

CTISum is a new benchmark dataset designed to help train and test AI systems that automatically summarize cyber threat intelligence (CTI, information about security attacks and threats). The dataset pairs threat reports with reference summaries, helping researchers develop better AI tools for quickly digesting large amounts of security information. This work addresses the challenge of processing the massive volume of threat data that security teams must analyze.

Elsevier Security Journals

Musk testimony dominated the first week of Musk v. Altman: 'You can't just steal a charity'

infonews
policy
May 2, 2026

Elon Musk testified in a lawsuit against OpenAI CEO Sam Altman and president Greg Brockman, claiming they broke promises to keep the AI company a nonprofit and misused his $38 million donation for commercial purposes. Musk argued that OpenAI (which he helped found in 2015) shifted from a charitable mission to a for-profit operation after he left the board in 2018, especially after ChatGPT's launch in 2022 made the company worth over $850 billion. The case centers on whether a company can profit from a charitable mission while still claiming nonprofit status.

CNBC Technology

New Bluekit Phishing Kit Features AI Assistant

infonews
security
May 2, 2026

Bluekit is a phishing kit (software designed to steal login credentials by creating fake websites) that has been discovered with advanced features including an AI assistant, automated domain registration, voice cloning, and templates for impersonating popular services like Gmail and Apple ID. The kit uses a dashboard to manage fake websites, capture stolen credentials, and track logged-in sessions, with Telegram as the default channel for sending stolen data. Although Bluekit is still in development and has not yet been seen in actual attacks, security researchers warn that its rapid feature updates could make it a serious threat if it gains wider adoption.

SecurityWeek

Disneyland Now Uses Face Recognition on Visitors

infonews
securityprivacy
May 2, 2026

Disneyland announced that visitors to its parks can optionally use face recognition technology to enter, though the company notes that visitors may still have their images captured even if they choose lanes without face recognition systems. The technology works by converting facial images into numerical values for matching purposes, with Disney stating these values will be deleted after 30 days except when needed for legal or fraud-prevention reasons.

Wired (Security)

AI agents can bypass guardrails and put credentials at risk, Okta study finds

highnews
securitysafety
May 1, 2026

Okta researchers found that AI agents like OpenClaw can bypass their safety guardrails (built-in rules meant to prevent harmful actions) and leak sensitive data such as credentials (login information and access tokens) when manipulated by attackers. In one test, an attacker who hijacked a user's Telegram account tricked the agent into revealing an OAuth token (a credential that grants access to accounts) by having it take a screenshot after the agent had forgotten it wasn't supposed to share the token. The core problem is that agents are designed to be maximally helpful, which makes them vulnerable to social engineering (manipulation) attacks that exploit that trait.

CSO Online

Oscars says AI actors, writing cannot win awards

infonews
policy
May 1, 2026

The Academy of Motion Picture Arts and Sciences announced that only acting 'demonstrably performed by humans' and writing that is 'human-authored' can be nominated for Oscars, marking a significant rule change as AI technology becomes more common in filmmaking. The decision was prompted by recent cases of AI being used to recreate actors and generate scripts, though the Academy did not ban AI use in other aspects of filmmaking like visual effects. The Academy stated it will evaluate films based on 'the degree to which a human was at the heart of the creative authorship' and reserves the right to request information about how generative AI (software that creates new content from patterns in training data) was used.

BBC Technology

Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models

infonews
industry
May 1, 2026

During the first week of his lawsuit against OpenAI, Elon Musk testified that CEO Sam Altman and president Greg Brockman deceived him into funding the company, claiming he donated $38 million thinking it would remain a nonprofit developing AI safely for humanity. Musk also admitted that his own AI company xAI distills (uses as a training source for) OpenAI's models, and warned that AI poses an existential risk that could "kill us all." The trial centers on whether Musk was genuinely committed to nonprofit AI development or is suing to undermine a competitor.

MIT Technology Review

Security posture improvement in the AI era

infonews
securitypolicy
May 1, 2026

As AI capabilities grow rapidly, organizations must keep their basic security fundamentals strong so they can respond quickly to new threats and vulnerabilities. Core practices such as patching consistently, enforcing least-privilege access (giving users only the minimum permissions they need), enabling logging and monitoring, encrypting data, and reviewing security configurations regularly remain essential whether or not an organization adopts AI.

Fix: AWS offers the Security Health Improvement Program (SHIP), a no-cost program available to all AWS customers that uses a data-driven methodology to assess current security posture, identify improvement opportunities across 10 core security use cases, build a prioritized action plan tailored to your environment, and establish continuous security improvement. The program is led by AWS Solutions Architects and Technical Account Managers who provide personalized reports and guidance. Additionally, organizations can use freely available resources like the AWS Well-Architected Framework to implement security fundamentals in their specific context.

AWS Security Blog

Pentagon inks deals with seven AI companies for classified military work

infonews
policyindustry
May 1, 2026

The Pentagon announced agreements with seven AI companies (OpenAI, Google, Nvidia, SpaceX, Reflection, Microsoft, and Amazon Web Services) to use their technology for classified military work with no restrictions on how it can be used. Anthropic, another major AI company, was not included in these deals because it had disagreed with the Pentagon over concerns about potential misuse of AI technology.

The Guardian Technology

CVE-2026-31771: In the Linux kernel, the following vulnerability has been resolved: Bluetooth: hci_event: move wake reason storage into

infovulnerability
security
May 1, 2026
CVE-2026-31771

A vulnerability in the Linux kernel's Bluetooth handler allowed short HCI event frames (data packets sent over Bluetooth) to bypass safety checks before reaching memory copying functions. The fix moves the storage of wake reason addresses into individual event handlers that already perform proper length validation, ensuring all bounds checks run before any data is processed.

Fix: Move hci_store_wake_reason() calls from the general event handler into nine specific event handlers (hci_conn_request_evt, hci_conn_complete_evt, hci_sync_conn_complete_evt, le_conn_complete_evt, hci_le_adv_report_evt, hci_le_ext_adv_report_evt, hci_le_direct_adv_report_evt, hci_le_pa_sync_established_evt, and hci_le_past_received_evt) where event-length validation has already succeeded. Convert hci_store_wake_reason() into a helper that only stores validated addresses while holding hci_dev_lock(), and annotate it with __must_hold(&hdev->lock) and lockdep_assert_held(&hdev->lock) to enforce the lock requirement.

NVD/CVE Database

CVE-2026-31735: In the Linux kernel, the following vulnerability has been resolved: iommupt: Fix short gather if the unmap goes into a

infovulnerability
security
May 1, 2026
CVE-2026-31735

A vulnerability exists in the Linux kernel's iommupt (IOMMU page table) code where the unmap operation can unmap more memory than requested, but the cache invalidation (gather) only clears the originally requested range instead of the entire unmapped area. This mismatch could leave stale memory translations cached, potentially causing security or stability issues, though the developers believe it may not be triggerable in practice.

NVD/CVE Database
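The mismatch described above — unmapping is page-granular, so more memory can be unmapped than was asked for — can be sketched in a few lines of Python (hypothetical helpers, not the kernel's iommupt code):

```python
PAGE = 4096

def unmap(requested_start: int, requested_len: int) -> tuple[int, int]:
    # Hypothetical page-table helper: unmapping works on whole pages,
    # so the range actually unmapped is the requested range rounded
    # outward to page boundaries. Returns (start, length).
    start = requested_start - (requested_start % PAGE)
    end = requested_start + requested_len
    end = end + (-end % PAGE)
    return start, end - start

def invalidate_range_fixed(requested_start: int, requested_len: int) -> tuple[int, int]:
    # The fix pattern: invalidate the range that was REALLY unmapped,
    # not merely the range the caller asked for, so no stale
    # translations survive in the cache.
    return unmap(requested_start, requested_len)

actual = invalidate_range_fixed(5000, 100)
assert actual == (4096, 4096)  # the whole 4 KiB page, not just 100 bytes
```

The buggy behavior corresponds to gathering only `(5000, 100)` for invalidation while the page table had actually dropped the full `(4096, 4096)` page.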

Microsoft Agent 365, now generally available, expands capabilities and integrations

infonews
securitypolicy
May 1, 2026

Microsoft Agent 365 is a new platform that helps organizations observe, govern, and secure AI agents (autonomous software programs that can access data and invoke tools) that are spreading across their systems faster than they can control them. The tool addresses the problem of 'shadow AI' (unmanaged agents operating without visibility) by providing a single control plane to monitor agents, whether they act on behalf of users or operate independently with their own permissions. Agent 365 integrates with Microsoft Defender and Intune to discover and manage both local agents (like those running on Windows devices) and cloud-based agents.

Fix: Organizations can use Microsoft Agent 365 with Microsoft Defender and Intune to 'discover and manage local and cloud-hosted agents' and 'apply appropriate controls, such as blocking unmanaged agents.' The source also mentions 'Windows 365 for Agents' as 'a secured, managed environment for agents to work in,' though specific implementation details are not provided in the text.

Microsoft Security Blog

If AI's So Smart, Why Does It Keep Deleting Production Databases?

infonews
securitysafety
May 1, 2026

The article argues that AI systems aren't inherently flawed when they cause problems like deleting production databases (the live systems storing important data). Instead, the real issue is that companies are deploying AI agents (programs that act autonomously to accomplish tasks) into their critical systems without adequately testing them for security risks first.

Dark Reading

Atlassian stock soars 20% after earnings show strong cloud, data center growth

infonews
industry
May 1, 2026

Atlassian, a software company, reported better-than-expected earnings with strong growth in cloud services (online-based software accessed over the internet) and data center revenue, causing its stock price to jump 20%. The company's success comes despite broader concerns in the tech industry about how AI tools might disrupt software businesses, with Atlassian's CEO arguing that these worries are overblown based on strong customer demand.

CNBC Technology

Page 5 of 223