aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
4482 items

CVE-2026-7141: A vulnerability was found in vllm up to 0.19.0. The affected element is the function has_mamba_layers of the file vllm/v

mediumvulnerability
security
Apr 27, 2026
CVE-2026-7141

A vulnerability was found in vllm (a language model serving framework) up to version 0.19.0 in the has_mamba_layers function, which can result in uninitialized resource (memory that hasn't been set to a known value before use). An attacker can trigger this flaw remotely, though the attack is difficult to execute and requires high complexity.

Fix: Deploy patch 1ad67864c0c20f167929e64c875f5c28e1aad9fd to fix this issue.

NVD/CVE Database

OpenAI shakes up partnership with Microsoft, capping revenue share payments

infonews
industry
Apr 27, 2026

OpenAI and Microsoft announced a revised partnership agreement that allows OpenAI to cap its revenue share payments to Microsoft and serve customers through any cloud provider, not just Microsoft Azure. Previously, OpenAI was restricted to primarily using Microsoft's cloud services, but the new deal lets OpenAI work with competitors like Amazon and Google while maintaining Microsoft as its primary provider through 2030.

This bank CEO let his AI clone handle an earnings call — now he's signing an OpenAI deal

infonews
industry
Apr 27, 2026

Customers Bank CEO Sam Sidhu revealed that an AI clone (a digital voice generated to sound like him) delivered his prepared remarks during an earnings call, then announced a partnership with OpenAI to automate banking processes like loan approvals and account openings. The bank plans to deploy AI agents (software that can make decisions and take actions with minimal human input) across lending, deposits, and payments over the next 6-12 months, with goals including reducing loan processing time from 30-45 days to 7 days and account opening time to under 20 minutes.

Microsoft and OpenAI’s famed AGI agreement is dead

infonews
policy
Apr 27, 2026

Microsoft and OpenAI have removed a clause from their partnership agreement that previously governed what would happen if AGI (artificial general intelligence, an AI system that can do any intellectual task a human can do) was developed. Under the new terms, Microsoft remains OpenAI's primary cloud partner with first access to new products, but OpenAI now has freedom to use other cloud providers instead of being locked into Microsoft's Azure platform.

Elon Musk and Sam Altman’s court battle over the future of OpenAI

infonews
policy
Apr 27, 2026

Elon Musk, a cofounder of OpenAI, is suing the company and its leaders Sam Altman and Greg Brockman, claiming they abandoned OpenAI's original mission to develop AI for humanity's benefit and shifted focus to profit instead. OpenAI counters that the lawsuit is a baseless attempt by Musk to harm a competitor to his own AI ventures. Musk is seeking the removal of Altman and Brockman, an end to OpenAI's nonprofit status, and up to $150 billion in damages.

OpenAI available at FedRAMP Moderate

inforegulatory
policy
Apr 27, 2026

OpenAI has received FedRAMP 20x Moderate authorization (a security certification that allows U.S. government agencies to use cloud services), making ChatGPT Enterprise and the API Platform available for federal use. This certification was achieved through a faster authorization process that emphasizes cloud-native security evidence and automated validation, allowing government agencies to access advanced AI capabilities like GPT-5.5 while meeting federal security and governance requirements.

Qualcomm up 7% on report it’s partnering with OpenAI on smartphone AI chip

infonews
industry
Apr 27, 2026

Qualcomm is reportedly partnering with OpenAI and MediaTek to develop custom smartphone chips, with mass production expected in 2028. According to analyst Ming-Chi Kuo, OpenAI believes controlling both the operating system (the software that runs a device) and hardware will let it deliver comprehensive AI agent services (AI systems that can perform tasks autonomously) that use real-time smartphone data to improve performance.

Deepfake Voice Attacks are Outpacing Defenses: What Security Leaders Should Know

highnews
securitysafety

Parsing Agentic Offensive Security's Existential Threat

infonews
safetysecurity

Microsoft patched an ‘agent-only’ role that was not

highnews
security
Apr 27, 2026

Microsoft's 'Agent ID Administrator' role, designed to let AI agents have controlled identities in Entra ID (Microsoft's identity management system), had a security flaw that let users take ownership of unrelated service principals (the tenant-specific identities that applications use to authenticate and access resources). This meant attackers could gain the same privileges as more powerful administrator roles and potentially take over the entire tenant (organization's cloud environment).

The Download: DeepSeek’s latest AI breakthrough, and the race to build world models

infonews
industryresearch

Malicious AI Prompt Injection Attacks Increasing, but Sophistication Still Low: Google

lownews
securityresearch

Mythos Changed the Math on Vulnerability Discovery. Most Teams Aren't Ready for the Remediation Side

infonews
securityindustry

AI is reshaping DevSecOps to bring security closer to the code

infonews
securityindustry

The ‘manager of agents’: How AI evolves the SOC analyst role

infonews
industrysafety

Elon Musk and Sam Altman face off in court over OpenAI’s founding mission

infonews
policy
Apr 27, 2026

Elon Musk is suing Sam Altman and OpenAI, claiming they violated their founding agreement by converting OpenAI from a non-profit (an organization that doesn't aim to make money for owners) to a for-profit business. The lawsuit alleges fraud and breach of contract, with the trial beginning in Oakland, California, and expected to last two to three weeks.

Announcing our partnership with the Republic of Korea

infonews
industrypolicy

SBOMs into Agentic AIBOMs: Schema Extensions, Agentic Orchestration and Reproducibility Evaluation

inforesearchPeer-Reviewed
research

The next phase of the Microsoft OpenAI partnership

infonews
industry
Apr 27, 2026

Microsoft and OpenAI amended their partnership agreement to clarify their long-term relationship and how they will work together on AI development. Key changes include OpenAI gaining freedom to sell products through any cloud provider (not just Microsoft's Azure), Microsoft receiving a non-exclusive license to OpenAI's technology through 2032, and changes to how the companies share revenue. The amendment aims to give both companies flexibility while maintaining their collaborative work on building large-scale AI systems.

Choco automates food distribution with AI agents

infonews
industry
Apr 26, 2026

Choco, an AI-powered food distribution platform serving over 100,000 buyers, uses OpenAI APIs to power AI agents that automate order processing from multiple input types (emails, texts, images, voice calls). OrderAgent and VoiceAgent convert unstructured customer inputs into structured ERP (enterprise resource planning, a system that manages business operations) orders by learning from each customer's ordering history, achieving up to 50% reduction in manual work and error rates below 1-5%.

Previous15 / 225Next
CNBC Technology
CNBC Technology
The Verge (AI)
The Verge (AI)
OpenAI Blog
CNBC Technology
Apr 27, 2026

Deepfake voice and video attacks (AI-generated replicas of real people) are becoming increasingly common and costly, with tools that require only three seconds of audio and cost almost nothing to create. Attackers target finance employees and IT staff by impersonating executives on calls or video meetings to authorize large money transfers or credential changes, and these attacks bypass traditional security tools because they rely on tricking people rather than exploiting software vulnerabilities. Organizations that have successfully stopped these attacks all used the same defense: training employees to pause and verify requests before acting on them.

Fix: The source explicitly states: 'The organizations that have stopped these attacks all found the same answer: train your people to pause and verify before they act.' No specific training program, tool, or technical mitigation is detailed in the text.

BleepingComputer
Apr 27, 2026

Some people worry that advanced frontier LLMs (large language models, AI systems trained on massive amounts of text) like Claude Mythos and GPT-5.5 could cause serious cybersecurity problems by being misused for attacks. However, security researcher Ari Herbert-Voss suggests this situation could also present opportunities.

Dark Reading

Fix: Microsoft patched the issue by blocking the Agent ID Administrator role from modifying non-agent service principals. The fix was fully rolled out by April 9, 2026, across all cloud environments.

CSO Online
Apr 27, 2026

DeepSeek released V4, a new AI model that can process longer text more efficiently and matches the performance of leading competitors from OpenAI, Anthropic, and Google while remaining open source. Researchers are increasingly focused on developing world models (AI systems that understand and can interact with the physical world, not just digital tasks) to overcome limitations of current language models and enable advances in robotics and physical tasks like laundry folding or navigation.

MIT Technology Review
Apr 27, 2026

Google researchers found that indirect prompt injection attacks (hidden traps where malicious instructions in external data trick AI systems into bypassing their safety rules) on websites are increasing, with a 32% rise between November 2025 and February 2026, but current attacks remain relatively unsophisticated. The attacks they discovered fell into two categories: exfiltration attempts that try to steal data like IP addresses and credentials, and destruction attempts that aim to delete files, though neither showed advanced techniques. Researchers warn that while today's attacks are low in sophistication, the upward trend suggests the threat will soon grow in both scale and complexity.

SecurityWeek
Apr 27, 2026

Anthropic's Claude Mythos is an AI system that can discover vulnerabilities much faster than human teams, but organizations are unprepared for the remediation (fixing) side of the process. The real problem isn't finding vulnerabilities quickly, it's that most teams lack the infrastructure to triage, prioritize, and verify fixes once they're discovered, so faster discovery just creates a growing backlog of unfixed critical issues.

The Hacker News
Apr 27, 2026

AI is transforming DevSecOps (the practice of integrating security into software development processes) by embedding security checks earlier in coding and automating vulnerability detection and fixes. The shift moves security from happening after code is written to happening during code generation itself, with AI tools providing secure coding guidance, scanning for vulnerabilities using reasoning rather than fixed rules, and suggesting automated fixes integrated directly into developer workflows.

CSO Online
Apr 27, 2026

Rather than eliminating SOC analyst jobs, agentic AI (AI systems that can independently execute tasks) is transforming entry-level analysts from performing repetitive investigative work into 'managers of agents' who oversee AI systems and make decisions based on their findings. The shift moves analysts from manually gathering evidence across multiple systems to reviewing AI-generated investigations and validating conclusions, allowing them to handle more alerts at a higher level of judgment.

CSO Online
The Guardian Technology
Apr 27, 2026

Google DeepMind announced a partnership with South Korea's Ministry of Science and ICT to advance AI research and development in the country. The collaboration includes establishing an AI Campus in Seoul where Korean researchers can access Google's advanced AI models for breakthroughs in life sciences, weather, climate, and energy, while also supporting talent development through internships and scholarships.

DeepMind Safety Research
Apr 27, 2026

This academic paper explores how Software Bill of Materials (SBOMs, detailed lists of all software components used in a project) can be extended to cover agentic AI systems (AI systems that can independently make decisions and take actions). The paper discusses schema extensions, how to organize and orchestrate these agentic components, and methods to evaluate whether AI systems produce reproducible results.

ACM Digital Library (TOPS, DTRAP, CSUR)
OpenAI Blog

Fix: The source explicitly recommends three practices: (1) 'Start with evaluation from day one: Even a small ground-truth dataset (10–20 examples) enables teams to measure progress, validate improvements, and iterate with confidence.' (2) 'Invest in AI-native observability: Debugging AI systems requires more than traditional logs—capturing model inputs, outputs, and reasoning traces is essential to understand and improve performance.' (3) 'Set the right expectations early: Unlike deterministic software, LLMs are probabilistic. Educating teams and users on this difference is key to building trust and avoiding friction during adoption.'

OpenAI Blog