aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

4452 items

CVE-2026-42092: titra is an open source time tracking project. In version 0.99.52, the globalsettings Meteor publication returns all glo

medium · vulnerability
security
May 4, 2026
CVE-2026-42092

Titra, an open source time tracking application, has a vulnerability in version 0.99.52 where the globalsettings Meteor publication (a feature that broadcasts data to connected users) exposes sensitive configuration information like API keys without checking if the user has admin permissions. Any authenticated user (someone logged into the system) can access these secrets through DDP (the protocol Meteor uses to send data to clients).

NVD/CVE Database
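The missing check described above can be illustrated with a minimal sketch (Python for illustration only; titra itself is a Meteor/JavaScript app, and the settings fields below are invented):

```python
# Hypothetical settings document; field names are invented for illustration.
SETTINGS = {"site_name": "demo", "api_key": "sk-secret"}
PUBLIC_FIELDS = {"site_name"}

def publish_settings(user: dict) -> dict:
    # Fixed pattern: only admins see secret fields; everyone else gets
    # the public subset instead of the whole settings document.
    if user.get("is_admin"):
        return dict(SETTINGS)
    return {k: v for k, v in SETTINGS.items() if k in PUBLIC_FIELDS}
```

The vulnerable version is this function without the `is_admin` branch: the publication returns the full document to any authenticated user.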

CVE-2026-42440: OOM Denial of Service via Unbounded Array Allocation in Apache OpenNLP AbstractModelReader. Versions Affected: before

high · vulnerability
security
May 4, 2026
CVE-2026-42440

Apache OpenNLP has a vulnerability where three methods in AbstractModelReader read count values from binary model files without checking if they're reasonable, allowing an attacker to trigger an OOM error (a crash caused by the program running out of memory) by creating a malicious .bin file with an extremely large count value. This denial of service (making a service unavailable) attack requires minimal file size and crashes the Java virtual machine early during model loading.
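The fix pattern — validating a count field read from an untrusted file before allocating — can be sketched as follows (Python for illustration; OpenNLP itself is Java, and the binary layout below is simplified):

```python
import struct

MAX_ENTRIES = 10_000_000  # analogous to the fix's default upper bound

def read_entries(buf: bytes) -> list:
    # Read a 4-byte big-endian count, as a binary model reader might.
    (count,) = struct.unpack_from(">i", buf, 0)
    # Without this check, a hostile file can declare an enormous count
    # and trigger a huge allocation that crashes the process (OOM).
    if count < 0 or count > MAX_ENTRIES:
        raise ValueError(f"implausible entry count: {count}")
    return [None] * count
```

A tiny malicious file is enough: the attack cost is a few bytes declaring a count near 2^31, while the victim pays for the allocation.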

CVE-2026-42077: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a prototype pollution vulnerabilit

medium · vulnerability
security
May 4, 2026
CVE-2026-42077

Evolver, a self-evolving engine for AI agents, had a prototype pollution vulnerability (a bug where attackers inject malicious properties into core JavaScript objects) in versions before 1.69.3. The flaw existed in functions that merged user data without blocking dangerous keys like __proto__ and constructor, allowing attackers to modify how all JavaScript objects behave.

CVE-2026-42076: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a command injection vulnerability

critical · vulnerability
security
May 4, 2026
CVE-2026-42076

Evolver, a tool that helps AI agents improve themselves, had a command injection vulnerability (a security flaw where attackers trick the system into running unauthorized commands) in versions before 1.69.3. The flaw was in the _extractLLM() function, which built shell commands using simple string concatenation without cleaning the input first, allowing attackers to execute arbitrary commands on the server when certain input contained shell metacharacters (special characters that have meaning to the command system).
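The unsafe-concatenation pattern and its standard remedy can be sketched in Python (Evolver itself is JavaScript; `_extractLLM()` is its real function name, but everything below is an illustrative analogy using `echo`):

```python
import subprocess

def run_tool_unsafe(user_arg: str) -> str:
    # VULNERABLE pattern: user input concatenated into a shell string.
    # Metacharacters like ';' are interpreted by the shell.
    return subprocess.run(f"echo {user_arg}", shell=True,
                          capture_output=True, text=True).stdout

def run_tool_safe(user_arg: str) -> str:
    # Safe pattern: argument list, no shell; metacharacters stay literal.
    return subprocess.run(["echo", user_arg],
                          capture_output=True, text=True).stdout
```

With input like `hello; echo pwned`, the unsafe version runs a second command, while the safe version passes the whole string as a single literal argument.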

CVE-2026-42075: Evolver is a GEP-powered self-evolving engine for AI agents. Prior to version 1.69.3, a path traversal vulnerability in

high · vulnerability
security
May 4, 2026
CVE-2026-42075

Evolver, a GEP-powered self-evolving engine for AI agents, contained a path traversal vulnerability (a type of attack where an attacker manipulates file paths to access files outside their intended directory) in versions before 1.69.3. The vulnerability was in the skill download command's --out= flag, which did not validate user-provided file paths, allowing attackers to write files to any location on the system, potentially overwriting critical files.
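The general defense — resolve the user-supplied path and confirm it stays inside the intended directory — looks like this (a Python sketch, not Evolver's actual code; `is_relative_to` needs Python 3.9+):

```python
from pathlib import Path

def resolve_out_path(base_dir: str, user_out: str) -> Path:
    base = Path(base_dir).resolve()
    target = (base / user_out).resolve()
    # Reject paths that escape the intended output directory,
    # e.g. user_out = "../../etc/passwd".
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes output directory: {user_out}")
    return target
```

Resolving before checking matters: a naive prefix test on the raw string misses `..` segments and symlinks.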

Anthropic teams with Goldman, Blackstone and others on $1.5 billion AI venture targeting PE-owned firms

info · news
industry
May 4, 2026

Anthropic has partnered with Goldman Sachs, Blackstone, and other investment firms to create a $1.5 billion venture that will deploy Claude, Anthropic's AI model, directly into businesses. The partnership aims to address a shortage of experts who can implement AI technology in real-world business operations by embedding engineers inside companies to redesign workflows and integrate AI into core processes, starting with companies owned by the investment firms.

AI platforms reference Nigel Farage more than other leaders when prompted on UK politics, study shows

info · news
research
May 4, 2026

A study found that AI platforms disproportionately reference Nigel Farage and Reform UK more than other UK political leaders when answering questions about British politics. Researchers suggest this indicates Reform UK has achieved unusual visibility in LLMs (large language models, AI systems trained on text data to generate responses).

Week one of the Musk v. Altman trial: What it was like in the room

info · news
policy
May 4, 2026

Elon Musk is suing OpenAI and CEO Sam Altman in federal court, claiming he invested millions on the understanding that OpenAI would remain a nonprofit, and alleging that the company was secretly converted into a for-profit corporation, deceiving him about its original mission. The trial centers on whether Musk was actually deceived and when he discovered the alleged misconduct, with Musk seeking damages and the reversal of OpenAI's restructuring, which reduced the nonprofit's control.

Musk texted OpenAI's Brockman about settlement two days before trial began

info · news
policy
May 4, 2026

Elon Musk, who co-founded OpenAI in 2015, is suing the company for allegedly breaking its commitment to remain a nonprofit and pursue a charitable mission, claiming they instead commercialized the AI technology. Two days before the trial started, Musk texted OpenAI's president Greg Brockman about settling the case, but when Brockman suggested both sides drop their claims, Musk responded with a threat about making him and CEO Sam Altman "the most hated men in America."

CVE-2026-7482: Ollama before 0.17.1 contains a heap out-of-bounds read vulnerability in the GGUF model loader. The /api/create endpoint

critical · vulnerability
security
May 4, 2026
CVE-2026-7482

Ollama versions before 0.17.1 have a heap out-of-bounds read vulnerability (a bug where code reads memory outside its intended boundaries) in the GGUF model loader (the component that loads GGUF files, a machine learning model format). An attacker can upload a malicious GGUF file through the /api/create endpoint (an unprotected interface) with fake tensor size information, causing the server to read beyond the file's actual data and leak sensitive information like API keys and user conversations, which can then be stolen through the /api/push endpoint.
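The underlying bug class — trusting a size field from the file instead of checking it against the data actually present — can be sketched like this (Python; the layout is invented for illustration and is not the real GGUF format):

```python
import struct

def read_tensor(blob: bytes) -> bytes:
    # Illustrative layout: an 8-byte little-endian declared length,
    # followed by the tensor bytes.
    (declared,) = struct.unpack_from("<Q", blob, 0)
    payload = blob[8:]
    # Without this check, trusting `declared` lets a crafted file make
    # the loader read past the real data -- the out-of-bounds pattern
    # that leaks adjacent memory contents.
    if declared > len(payload):
        raise ValueError("declared tensor size exceeds file data")
    return payload[:declared]
```

In a memory-unsafe loader the equivalent missing check means the read runs into neighboring heap memory, which is what turns a parsing bug into an information leak.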

Copirate 365 at DEF CON: Plundering in the Depths of Microsoft Copilot (CVE-2026-24299)

high · news
security
May 4, 2026

This writeup describes vulnerabilities found in Microsoft Copilot products that allow attackers to steal sensitive data through multiple attack chains, including data exfiltration via HTML preview features, hijacking the AI's long-term memory through prompt injection (tricking an AI by hiding instructions in its input), and creating persistent backdoors. The vulnerabilities, assigned CVE-2026-24299, exploited what researchers call the "lethal trifecta," where an AI has access to private data, untrusted content, and external communication channels simultaneously.

Security agencies draw red lines around agentic AI deployments

info · news
security · policy

OpenAI Rolls Out Advanced Security for ChatGPT Accounts

info · news
security
May 4, 2026

OpenAI has introduced Advanced Account Security, an optional feature for ChatGPT users at high risk of targeted attacks, such as journalists and political dissidents. The feature strengthens account protection by disabling password-based login in favor of physical security keys or passkeys, replacing email and SMS account recovery with backup passkeys and recovery keys, shortening sign-in sessions, and automatically excluding user conversations from AI model training.

The fake IT worker problem CISOs can’t ignore

medium · news
security · safety

How CISOs should utilize data security posture management to inform risk

info · news
security
May 4, 2026

Data security posture management (DSPM, the practice of finding and tracking where sensitive information is stored in an organization) helps security leaders understand their data risks and make better security decisions, even without expensive dedicated tools. The core principle is to gain visibility into where sensitive data lives, understand its value, and use that information to prioritize security investments and respond to threats more effectively.

How OpenAI delivers low-latency voice AI at scale

info · news
industry
May 3, 2026

OpenAI rearchitected its WebRTC (web real-time communication, a standard protocol for sending low-latency audio and video between clients and servers) infrastructure to handle voice AI at scale while maintaining natural conversation speed. The team addressed three constraints that conflicted at scale: one-port-per-session media termination, stateful ICE (Interactive Connectivity Establishment, the process for establishing connections across firewalls) and DTLS (Datagram Transport Layer Security, encryption for real-time data) session stability, and global routing latency. OpenAI built a new split relay plus transceiver architecture that preserves standard WebRTC behavior for users while changing how data packets are routed internally.

Privacy-preserving path constrained shortest distance queries on encrypted graphs

info · research · Peer-Reviewed
security

US Military Reaches Deals With 7 Tech Companies to Use Their AI on Classified Systems

info · news
policy · safety

CVE-2026-7700: A weakness has been identified in langflow-ai langflow up to 1.8.4. This affects the function eval of the file src/lfx/s

medium · vulnerability
security
May 3, 2026
CVE-2026-7700

A code injection vulnerability (CVE-2026-7700) was found in langflow-ai langflow up to version 1.8.4, specifically in the eval function of the LambdaFilterComponent. The vulnerability allows attackers to execute arbitrary code remotely if they have login access, and a working exploit has been publicly released.
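The general remedy for eval-on-user-input is to replace `eval` with a parser that only accepts data, such as Python's `ast.literal_eval` (a sketch of the bug class, not langflow's actual fix; the LambdaFilterComponent evaluates user-supplied lambdas, which would need a far more restrictive sandboxed evaluator than shown here):

```python
import ast

def eval_filter_unsafe(expr: str):
    # VULNERABLE: eval() on user input executes arbitrary code,
    # e.g. "__import__('os').system('...')".
    return eval(expr)

def eval_filter_safe(expr: str):
    # Safer: literal_eval only accepts constants, lists, dicts, etc.,
    # and raises on anything that would execute code.
    return ast.literal_eval(expr)
```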

Quoting Anthropic

info · news
safety
May 3, 2026

Anthropic researchers tested Claude (their AI assistant) for sycophancy (behavior of agreeing excessively or giving undeserved praise to please the user) by checking whether it would push back on ideas, maintain positions when challenged, and speak honestly. Overall, Claude rarely showed sycophantic behavior (only 9% of conversations), but it was more prone to this problem in conversations about spirituality (38%) and relationships (25%).

Page 4 of 223

Fix (CVE-2026-42440, Apache OpenNLP): 2.x users should upgrade to 2.5.9. 3.x users should upgrade to 3.0.0-M3. The fix adds an upper bound check (default 10,000,000) on the three count fields before array allocation; values that are negative or exceed the bound throw an IllegalArgumentException and fail safely. Users who cannot upgrade immediately should treat all .bin model files as untrusted input unless their origin is verified, and avoid loading models from end users or third-party repositories without integrity checks. Deployments needing higher limits can set the OPENNLP_MAX_ENTRIES system property at JVM startup (e.g., -DOPENNLP_MAX_ENTRIES=50000000).

NVD/CVE Database

Fix (Evolver): Update to version 1.69.3, where this issue has been patched.

NVD/CVE Database

Fix (Evolver): This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.

NVD/CVE Database

Fix (Evolver): This issue has been patched in version 1.69.3. Users should upgrade to version 1.69.3 or later.

NVD/CVE Database
CNBC Technology
The Guardian Technology
MIT Technology Review
CNBC Technology

Fix (CVE-2026-7482, Ollama): Update Ollama to version 0.17.1 or later.

NVD/CVE Database

Fix (CVE-2026-24299, Microsoft Copilot): Microsoft patched these issues. The source states: "MSRC assigned CVE-2026-24299 and the issues are now patched." No specific patch version number or detailed mitigation steps are provided in the source text.

Embrace The Red
May 4, 2026

Security agencies including CISA have issued joint guidance on safely deploying agentic AI (autonomous AI systems that can take actions independently), warning that prompt injection (tricking an AI by hiding instructions in its input) and other attacks are common threats. The advisory recommends organizations implement strict access controls using the principle of least privilege (giving systems only the minimum permissions they need), continuous monitoring with human oversight, and careful testing before deploying AI agents to production environments.

Fix: The source text outlines recommended design and development guidelines including: strong authentication using Secure by Design principles, enforcing least-privilege principles and isolating agent capabilities, maintaining a clear inventory of agent capabilities and dependencies, implementing continuous monitoring and auditing of AI agent operations, integrating human control and oversight into workflows (including live monitoring during task execution and human approval for decision-making steps), validating how agents interpret inputs to guard against prompt injection, and regular testing of incident response plans.
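The least-privilege recommendation can be made concrete with a deny-by-default capability table (a hypothetical Python sketch; the agent and tool names are invented, not from the advisory):

```python
# Hypothetical per-agent tool allowlist: each agent gets only the
# capabilities it was explicitly granted.
AGENT_ALLOWED_TOOLS = {
    "research-agent": {"web_search", "read_file"},
    "billing-agent": {"read_invoice"},
}

def invoke_tool(agent: str, tool: str, call):
    allowed = AGENT_ALLOWED_TOOLS.get(agent, set())
    if tool not in allowed:
        # Deny by default, per the principle of least privilege.
        raise PermissionError(f"{agent} may not call {tool}")
    return call()
```

In a real deployment this check would sit between the model's tool-call request and the tool's execution, alongside the logging and human-approval steps the advisory recommends.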

CSO Online

Fix: OpenAI offers Advanced Account Security as a mitigation. Users can enable this opt-in feature, which includes: disabling password-based login and requiring physical security keys or passkeys (OpenAI has partnered with Yubico to offer YubiKey devices at a discount); replacing email and SMS account recovery with backup passkeys, recovery keys, and security keys; shortening sign-in sessions; and receiving alerts about logins with the ability to manage active sessions. Users can enroll through OpenAI's dedicated enrollment page for Advanced Account Security.

SecurityWeek
May 4, 2026

Fake IT workers, increasingly enabled by AI tools and deepfakes, are being hired into organizations as an insider threat (a risk posed by trusted employees or contractors with system access). State actors like North Korea and individuals use stolen or synthetic identities, AI-assisted interview responses, and social engineering to bypass recruitment screening and gain access to sensitive systems and data.

CSO Online
CSO Online
OpenAI Blog
May 3, 2026

This research paper, published in September 2026, addresses how to find the shortest path between two points on encrypted graphs (networks where connections and data are hidden using cryptography) while keeping the query private. The work focuses on path-constrained queries, meaning the shortest route must follow specific rules or limitations, all without revealing the actual graph structure or what users are searching for.

Elsevier Security Journals
May 3, 2026

The US Pentagon has signed contracts with seven tech companies (Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX) to use their AI systems on classified military networks to help with battlefield decisions and operations. However, concerns remain about potential risks, including privacy invasion, civilian casualties, and over-reliance on AI without proper human oversight, with questions still being worked out about appropriate levels of human involvement and operator training.

Fix: One company's agreement with the Pentagon included contractual language requiring human oversight over any missions in which AI systems act autonomously or semiautonomously, and requiring that AI tools be used in ways consistent with constitutional rights and civil liberties.

SecurityWeek
NVD/CVE Database
Simon Willison's Weblog