aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4482 items

CVE-2026-40979: In Spring AI, having access to a shared environment can expose the ONNX model used by the application

medium · vulnerability
security
Apr 28, 2026
CVE-2026-40979

CVE-2026-40979 is a security flaw in Spring AI (a framework for building AI applications) where someone with access to a shared computing environment can find and view the ONNX model (a type of machine learning model file) that the application uses. This vulnerability affects Spring AI versions 1.0.0 through 1.0.5 and 1.1.0 through 1.1.4.

Fix: Fixed in Spring AI version 1.0.6 and version 1.1.5.

NVD/CVE Database

What CISOs need to get right as identity enters the agentic era

info · news
security · policy

CVE-2026-7235: A security vulnerability has been detected in ErlichLiu claude-agent-sdk-master up to b185aa7ff0d864581257008077b4010fca

medium · vulnerability
security
Apr 28, 2026
CVE-2026-7235

A path traversal vulnerability (a bug where an attacker manipulates file paths to access files they shouldn't) was found in the ErlichLiu claude-agent-sdk, affecting a file called app/api/agent-output/route.ts. An attacker can exploit this remotely by manipulating the outputFile parameter, and the vulnerability has already been publicly disclosed. The project uses continuous updates but has not yet responded to the security report.
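The root cause here is a classic pattern: a route handler builds a filesystem path directly from a request parameter. As a rough sketch of the usual mitigation (not the project's actual code; `OUTPUT_DIR` and the function name are hypothetical), the handler can resolve the supplied name against a fixed base directory and reject anything that escapes it:

```typescript
import * as path from "path";

// Hypothetical base directory; the advisory does not give the route's real path.
const OUTPUT_DIR = "/srv/agent-output";

// Resolve the user-supplied name against the allowed directory and reject
// anything that escapes it ("../../etc/passwd", absolute paths, etc.).
function resolveOutputFile(outputFile: string): string {
  const resolved = path.resolve(OUTPUT_DIR, outputFile);
  if (resolved !== OUTPUT_DIR && !resolved.startsWith(OUTPUT_DIR + path.sep)) {
    throw new Error(`path traversal rejected: ${outputFile}`);
  }
  return resolved;
}
```

Checking the resolved absolute path, rather than filtering substrings like "..", avoids bypasses via encoded or nested traversal sequences.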

CrowdStrike Expands ChatGPT Enterprise Integration with Enhanced Audit Logging and Activity Monitoring

info · news
security · policy

Microsoft Patches Entra ID Role Flaw That Enabled Service Principal Takeover

high · news
security
Apr 28, 2026

Microsoft fixed a security flaw in Entra ID (Microsoft's identity management system) where the Agent ID Administrator role, meant for AI agents, could be abused to take over service principals (accounts that applications use to authenticate). An attacker with this role could become the owner of any service principal and add their own credentials, potentially gaining broad control over a tenant (organization's cloud environment) if the targeted service principal had elevated permissions.

Meta, Google, OpenAI among Big Tech firms seeing top staff leaving to launch AI startups

info · news
industry
Apr 28, 2026

Top researchers from major AI companies like Google DeepMind, Meta, and OpenAI are leaving to start their own AI startups, which are raising hundreds of millions of dollars in funding. These new companies can focus on research areas that large tech firms deprioritize, such as new AI architectures and interpretability (understanding how AI systems make decisions), giving them a competitive advantage in the rapidly growing AI market.

Jury selection in Musk v. Altman: ‘People don’t like him’

info · news
industry
Apr 27, 2026

This is not an AI/LLM-related item. The content describes jury selection in a legal case between Elon Musk and Sam Altman over OpenAI disputes, focusing on prospective jurors' negative personal opinions about Musk. It does not discuss any AI technology, security vulnerabilities, or technical issues related to large language models or AI systems.

Introducing talkie: a 13B vintage language model from 1930

info · news
research
Apr 27, 2026

Researchers have created talkie, a 13 billion-parameter language model (a neural network with 13 billion adjustable values) trained entirely on English text from before 1931 to study how AI performs on historical knowledge and invention tasks. The base model uses only out-of-copyright data, but the chat version required fine-tuning (additional training to adjust behavior) with help from modern AI systems like Claude, which introduced some knowledge from after 1931 that the researchers are working to eliminate.

Our commitment to community safety

info · news
safety · policy

CVE-2026-32202: Microsoft Windows Protection Mechanism Failure Vulnerability

info · vulnerability
security
Apr 27, 2026
CVE-2026-32202 · 🔥 Actively Exploited

CVE-2024-1708: ConnectWise ScreenConnect Path Traversal Vulnerability

info · vulnerability
security
Apr 27, 2026
CVE-2024-1708 · EPSS: 53.7% · 🔥 Actively Exploited

OpenAI models, Codex, and Managed Agents come to AWS

info · news
industry
Apr 27, 2026

OpenAI and AWS have expanded their partnership to make OpenAI's models, including GPT-5.5, available through Amazon Bedrock (AWS's managed service for using AI models). This integration lets enterprises use OpenAI's capabilities within their existing AWS security systems, workflows, and infrastructure, with three new offerings: OpenAI models on AWS, Codex (a coding assistant used by over 4 million people weekly) on AWS, and Amazon Bedrock Managed Agents for building AI agents that can execute multi-step workflows.

Elon Musk and Sam Altman are going to court over OpenAI’s future

info · news
policy
Apr 27, 2026

Elon Musk is suing OpenAI CEO Sam Altman and president Greg Brockman, alleging they deceived him into funding the company by promising to keep it as a nonprofit focused on beneficial AI, then secretly restructured it into a for-profit operation. The trial could determine whether OpenAI can operate as a for-profit company and may result in removing current leadership or forcing the company back to nonprofit status. The case highlights a fundamental conflict over OpenAI's mission: whether it should prioritize open-source AI for public benefit or operate for financial gain to fund more advanced development.

CVE-2026-7178: A weakness has been identified in ChatGPTNextWeb NextChat up to 2.16.1. This affects the function storeUrl

high · vulnerability
security
Apr 27, 2026
CVE-2026-7178

A vulnerability (CVE-2026-7178) was found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems) through the storeUrl function in the Artifacts Endpoint. The flaw can be exploited remotely, and the attack code has been made public, though the project developers have not yet responded to the early notification.

CVE-2026-7177: A security flaw has been discovered in ChatGPTNextWeb NextChat up to 2.16.1. Affected by this issue is the function proxyHandler

high · vulnerability
security
Apr 27, 2026
CVE-2026-7177

A security flaw has been found in ChatGPTNextWeb NextChat up to version 2.16.1 that allows server-side request forgery (SSRF, where an attacker tricks a server into making unwanted requests to other systems). The vulnerability exists in the proxyHandler function and can be exploited remotely, with public exploits already available. The developers have been notified but have not yet responded.
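For both NextChat SSRF flaws, the usual mitigation is the same: parse the user-supplied URL and check its scheme and host against an explicit allowlist before the server makes any request. The sketch below is illustrative only; the hostnames and function name are assumptions, not NextChat's actual code:

```typescript
// Hypothetical allowlist; NextChat's real upstream hosts are not in the advisories.
const ALLOWED_HOSTS = new Set(["api.openai.com", "api.anthropic.com"]);

// Only proxy requests whose scheme and host are explicitly allowed, which
// blocks internal targets such as 169.254.169.254 or file:// URLs.
function isSafeProxyTarget(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a parseable absolute URL
  }
  if (url.protocol !== "https:") return false; // rejects http:, file:, gopher:, ...
  return ALLOWED_HOSTS.has(url.hostname);
}
```

An allowlist on the parsed hostname is preferable to denylisting private IP ranges, since denylists are routinely bypassed with redirects, alternate IP encodings, or DNS rebinding.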

Canonical lays out a plan for AI in Ubuntu Linux

info · news
industry
Apr 27, 2026

Canonical, the company behind Ubuntu Linux (a popular operating system), plans to add AI features to its system over the next year. These features will work in two ways: some will improve existing system functions quietly in the background, while others will be designed specifically for users who want AI-powered tools and workflows. The features will include accessibility improvements like better speech-to-text conversion and other AI-powered capabilities.

CVE-2026-7191: Arbitrary Code Execution via Sandbox Bypass in QnABot on AWS

critical · vulnerability
security
Apr 27, 2026

QnABot on AWS (a conversational AI tool built with Amazon Lex and other AWS services) has a vulnerability where administrators can run arbitrary code (unintended commands) by exploiting improper use of the static-eval npm package through the Content Designer interface, potentially giving them access to sensitive backend resources like databases and environment variables that should be protected.
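Packages like static-eval evaluate expression strings, and sandboxes around JavaScript evaluation have repeatedly been bypassed. A safer pattern for designer-supplied content is plain placeholder substitution against a fixed variable whitelist, so no expression is ever evaluated. This sketch is generic, with hypothetical names, and is not QnABot's actual template mechanism:

```typescript
// Fixed set of variables a content designer may reference; nothing outside
// this map is reachable, so there is no code-execution path.
const SAFE_VARS: Record<string, string> = {
  userName: "Alice",
  botName: "QnABot",
};

// Replace {{name}} only when `name` is a known key; unknown placeholders are
// left verbatim instead of being evaluated.
function renderTemplate(template: string): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    Object.prototype.hasOwnProperty.call(SAFE_VARS, name) ? SAFE_VARS[name] : match
  );
}
```

The design choice is to treat designer input as data to substitute into, never as code to run, which removes the entire class of sandbox-escape bugs rather than trying to contain them.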

Tracking the history of the now-deceased OpenAI Microsoft AGI clause

info · news
policy
Apr 27, 2026

Microsoft and OpenAI had a contract clause stating that if AGI (artificial general intelligence, meaning AI systems that outperform humans at most economically valuable work) was achieved, Microsoft would lose its commercial rights to OpenAI's technology. On April 27, 2026, this clause effectively ended when Microsoft's license became non-exclusive and Microsoft stopped paying revenue shares to OpenAI, changes that now apply regardless of technological progress.

Google employees ask Sundar Pichai to say no to classified military AI use

info · news
policy · safety

CVE-2026-31689: In the Linux kernel, the following vulnerability has been resolved: EDAC/mc: Fix error path ordering in edac_mc_alloc()

info · vulnerability
security
Apr 27, 2026
CVE-2026-31689

A bug in the Linux kernel's EDAC (error detection and correction) memory controller code causes a crash when memory allocation fails, because the code tries to clean up a device before it has been properly initialized. The fix reorders the initialization steps so the device is set up before the cleanup code can be called.

Page 14 of 225
Item details

What CISOs need to get right as identity enters the agentic era
Apr 28, 2026

As AI agents become more common, security leaders (CISOs, Chief Information Security Officers) face new challenges because these non-human identities are harder to track and verify than human users, and traditional security signals no longer work. The source recommends treating identity as the foundation of security architecture, with advice including maintaining clean directories, creating complete inventories of non-human identities (AI agents and service accounts), enforcing least privilege access (giving users only the permissions they need), using phishing-resistant authentication methods beyond SMS, and assuming that credentials may be compromised.

Fix: The source recommends several specific steps: (1) 'Build a strong foundation before layering on complexity' by getting 'clean directories, enforced least privilege, and reliable offboarding processes' in place; (2) 'Design for the new class of identities' by starting 'from least privilege rather than from legacy'; (3) 'Get your non-human identity inventory in order' by building 'a full inventory of non-human identities and include who is responsible for each identity, and what each one is authorized to do'; (4) 'Treat MFA as a starting point, not a destination' by including 'phishing-resistant alternatives to SMS or push-based MFA' along with 'least privilege, micro-segmentation, and continuous monitoring'; and (5) 'Assume credentials may be compromised and architect accordingly.'

CSO Online

CVE-2026-7235
NVD/CVE Database

CrowdStrike Expands ChatGPT Enterprise Integration with Enhanced Audit Logging and Activity Monitoring
Apr 28, 2026

CrowdStrike has expanded its ChatGPT Enterprise integration to provide deeper monitoring of how organizations use AI, including tracking user authentication, administrative changes, tool usage, and conversations. As AI becomes embedded in business operations across departments, security teams need visibility into not just who has access to ChatGPT Enterprise, but how the platform is actually being used and what data might be accessed. The expanded integration uses OpenAI's logging capabilities to detect suspicious activity like unusual login patterns and behavioral anomalies, shifting from just knowing the configuration of AI systems to actively monitoring their real-time usage.

Fix: Organizations can use CrowdStrike Falcon Shield's expanded ChatGPT Enterprise integration, which ingests and analyzes events from OpenAI's Compliance Logs Platform to provide continuous monitoring and detection. According to the source, this enables detection of suspicious authentication activity (malicious IP access, anonymized connections, unusual VPN sign-ins), behavioral anomalies (simultaneous logins from untrusted networks, unexpected browser or OS changes), and monitoring of administrative updates and GPT configuration changes. The integration correlates ChatGPT Enterprise activity with identity, device, and SaaS telemetry across the CrowdStrike Falcon platform to detect and respond to suspicious AI activity.

CrowdStrike Blog

Microsoft Patches Entra ID Role Flaw That Enabled Service Principal Takeover

Fix: Microsoft rolled out a patch on April 9, 2026 across all cloud environments. Following the fix, any attempt to assign ownership over non-agent service principals using the Agent ID Administrator role is now blocked and displays a "Forbidden" error message. Organizations are also advised to monitor sensitive role usage related to service principal ownership or credential changes, track service principal ownership changes, secure privileged service principals, and audit credential creation on service principals.

The Hacker News

Meta, Google, OpenAI among Big Tech firms seeing top staff leaving to launch AI startups
CNBC Technology

Jury selection in Musk v. Altman: 'People don't like him'
The Verge (AI)

Introducing talkie: a 13B vintage language model from 1930

Fix: The talkie team states they 'aspire to eventually move beyond this limitation' by using 'vintage base models themselves as judges to enable a fully bootstrapped era-appropriate post-training pipeline,' meaning they plan to use talkie's own historical knowledge rather than modern AI systems for future training adjustments. However, this is described as a future goal, not a solution currently implemented.

Simon Willison's Weblog

Our commitment to community safety
Apr 27, 2026

OpenAI describes its safety approach for ChatGPT to prevent misuse for violence, threats, or harm. The system is trained to distinguish between harmful requests and legitimate questions about violence for educational or historical reasons, while using detection systems and expert guidance to identify concerning patterns across conversations and take action like revoking access when needed.

OpenAI Blog

CVE-2026-32202: Microsoft Windows Protection Mechanism Failure Vulnerability

Microsoft Windows Shell has a protection mechanism failure vulnerability that lets attackers perform spoofing (impersonating someone or something else) over a network without authorization. This vulnerability is actively being exploited by real attackers, making it a serious security concern.

Fix: Apply mitigations per Microsoft vendor instructions, follow applicable BOD 22-01 guidance for cloud services (government cybersecurity directives), or discontinue use of the product if mitigations are unavailable. The due date for remediation is 2026-05-12.

CISA Known Exploited Vulnerabilities

CVE-2024-1708: ConnectWise ScreenConnect Path Traversal Vulnerability

ConnectWise ScreenConnect has a path traversal vulnerability (a flaw that lets attackers access files outside their intended directory) that could allow attackers to run remote code or steal sensitive data from critical systems. This vulnerability is actively being exploited by real attackers in the wild.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities

OpenAI models, Codex, and Managed Agents come to AWS
OpenAI Blog

Elon Musk and Sam Altman are going to court over OpenAI's future
MIT Technology Review

CVE-2026-7178
NVD/CVE Database

CVE-2026-7177
NVD/CVE Database

Canonical lays out a plan for AI in Ubuntu Linux
The Verge (AI)

CVE-2026-7191: Arbitrary Code Execution via Sandbox Bypass in QnABot on AWS
AWS Security Bulletins

Tracking the history of the now-deceased OpenAI Microsoft AGI clause
Simon Willison's Weblog

Google employees ask Sundar Pichai to say no to classified military AI use
Apr 27, 2026

Over 600 Google employees, including many from DeepMind (Google's AI research lab), signed a letter asking CEO Sundar Pichai to prevent the Pentagon from using Google's AI models for classified purposes (secret military projects). The employees argue that the only way to ensure Google isn't associated with potential harms from such uses is to reject these classified projects entirely, since otherwise they could happen without employee knowledge or oversight.

The Verge (AI)

CVE-2026-31689: EDAC/mc error path ordering in the Linux kernel

Fix: Reorder the calling sequence so that the device is initialized and thus the release function pointer is properly set before it can be used.

NVD/CVE Database