aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3115 items

5 key priorities for your RSAC 2026 agenda

info · news
security · policy
Mar 19, 2026

RSA Conference 2026 is fundamentally organized around AI security, with 40% of sessions focused on how AI affects cybersecurity across all tracks. CISOs face a dual challenge: adopting AI quickly to stay competitive while simultaneously securing enterprise systems against new threats that AI itself creates. The conference prioritizes five learning areas: securing the AI stack (including RAG workflows, LLM data pipelines, and prompt injection attacks), AI governance and regulatory compliance, managing non-human identities (AI agents and service accounts that now outnumber human users), addressing shadow AI risks (unsanctioned tools and AI-generated code), and implementing autonomous security operations.

CSO Online

How we monitor internal coding agents for misalignment

info · news
safety · security

Anthropic ban heralds new era of supply chain risk — with no clear playbook

info · news
policy · security

Secure Homegrown AI Agents with CrowdStrike Falcon AIDR and NVIDIA NeMo Guardrails

info · news
security · safety

Cloud Access Security Broker – A Buyer's Guide

info · news
security
Mar 18, 2026

A Cloud Access Security Broker (CASB) is a security tool that sits between a company's devices and cloud services to monitor user activity, enforce access rules, and detect threats. CASBs are increasingly used to protect data in hybrid cloud environments (where some data is on-premises and some in the cloud), enforce compliance with data protection regulations, secure remote work access, and detect malicious activity. Organizations should look for CASBs that offer visibility into cloud usage, granular control over user permissions, data protection features, and compliance support, and should ensure the tool integrates well with their existing cloud services and security systems.

OpenAI to acquire Astral

info · news
industry
Mar 18, 2026

OpenAI is acquiring Astral, a company that builds popular open source Python development tools like uv (for managing code dependencies), Ruff (for checking code quality), and ty (for type safety). After the acquisition closes, OpenAI plans to integrate these tools with Codex (its AI system for code generation) so that AI can work alongside the tools developers already use throughout their entire workflow, from planning changes to maintaining software over time.

CVE-2026-20131: Cisco Secure Firewall Management Center (FMC) Software and Cisco Security Cloud Control (SCC) Firewall Management Deserialization of Untrusted Data Vulnerability

info · vulnerability
security
Mar 18, 2026
CVE-2026-20131

Cisco Secure Firewall Management Center (FMC) and Cisco Security Cloud Control (SCC) contain a deserialization of untrusted data vulnerability (a flaw where the software unsafely processes data that could contain malicious code) in their web management interfaces. An unauthenticated attacker (someone without login credentials) can remotely execute arbitrary Java code with root privileges (the highest level of system access) on affected devices. This vulnerability is currently being actively exploited by attackers.

Autoresearching Apple's "LLM in a Flash" to run Qwen 397B locally

info · news
research
Mar 18, 2026

Researchers successfully ran a very large AI model (Qwen 397B, a Mixture-of-Experts model where each response only uses a subset of the total weights) on a MacBook Pro by using Apple's "LLM in a Flash" technique, which stores model data on the fast SSD storage and pulls it into RAM as needed rather than keeping everything in memory at once. They used Claude to run 90 experiments and generate optimized code that achieved 5.5+ tokens per second (response speed) by quantizing (reducing precision of) the expert weights to 2-bit while keeping other parts at full precision. The final setup used only 5.5GB of constant memory while streaming the remaining 120GB of compressed model weights from disk on demand.
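The 2-bit expert-weight quantization described above can be illustrated with a minimal pure-Python sketch (symmetric quantization with a single per-tensor scale; the setup in the article reportedly uses more elaborate schemes, and all names here are illustrative):

```python
def quantize_2bit(weights):
    """Map floats to 2-bit codes (4 levels) plus one per-tensor scale.

    At 2 bits per weight this is a 16x reduction versus float32 storage,
    which is what makes streaming large expert matrices from disk cheap.
    """
    scale = max(abs(w) for w in weights) or 1.0
    levels = [-1.0, -1 / 3, 1 / 3, 1.0]
    codes = [
        min(range(4), key=lambda i: abs(levels[i] - w / scale))
        for w in weights
    ]
    return codes, scale

def dequantize_2bit(codes, scale):
    """Reconstruct approximate weights from 2-bit codes and the scale."""
    levels = [-1.0, -1 / 3, 1 / 3, 1.0]
    return [levels[c] * scale for c in codes]
```

A real implementation would additionally pack four codes per byte and quantize in small groups, each with its own scale, to limit error.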

CVE-2025-15031: A vulnerability in MLflow's pyfunc extraction process allows for arbitrary file writes due to improper handling of tar archives

high · vulnerability
security
Mar 18, 2026
CVE-2025-15031

MLflow, a machine learning platform, has a vulnerability (CVE-2025-15031) in how it extracts model files from compressed archives. The issue is that the software uses `tarfile.extractall` (a Python function that unpacks compressed tar files) without checking whether file paths are safe, allowing attackers to use specially crafted archives with `..` (parent directory references) or absolute paths to write files outside the intended folder. This could let attackers overwrite files or execute malicious code, especially in shared environments or when processing untrusted model files.
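The generic mitigation for this class of flaw is to validate each member's resolved path before extraction. The sketch below illustrates the pattern and is not MLflow's actual patch; on Python 3.12+, `tarfile.extractall(..., filter="data")` performs similar checks natively.

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting members that resolve outside dest_dir."""
    dest = os.path.realpath(dest_dir)
    os.makedirs(dest, exist_ok=True)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # Catches both `..` sequences and absolute member paths.
            if not target.startswith(dest + os.sep):
                raise ValueError(f"blocked path traversal: {member.name}")
        tar.extractall(dest)
```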

Navigating Security Tradeoffs of AI Agents

info · news
security · safety

GHSA-gjgx-rvqr-6w6v: Mesop Affected by Unauthenticated Remote Code Execution via Test Suite Route /exec-py

critical · vulnerability
security
Mar 18, 2026
CVE-2026-33057

Mesop contains a critical vulnerability in its testing module where a `/exec-py` route accepts Python code without any authentication checks and executes it directly on the server. This allows anyone who can send an HTTP request to the endpoint to run arbitrary commands on the machine hosting the application, a flaw known as unauthenticated remote code execution (RCE, where an attacker runs commands on a system they don't own).
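As a generic illustration of the missing control (function name and environment variable are hypothetical, not Mesop's API), a debug code-execution route should at minimum be gated behind a local secret, and the real fix is to exclude such routes from production builds entirely:

```python
import hmac
import os

def guarded_exec(request_token: str, body: str):
    """Run debug code only when the caller proves knowledge of a local secret.

    Hypothetical sketch: the actual fix for GHSA-gjgx-rvqr-6w6v is to not
    expose the test-suite route outside of tests at all.
    """
    expected = os.environ.get("DEBUG_EXEC_TOKEN")
    if not expected or not hmac.compare_digest(request_token, expected):
        raise PermissionError("exec route disabled or token mismatch")
    namespace = {}
    exec(body, namespace)  # still dangerous; trusted debug input only
    return namespace.get("result")
```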

GHSA-8qvf-mr4w-9x2c: Mesop has a Path Traversal utilizing `FileStateSessionBackend` leads to Application Denial of Service and File Write/Deletion

critical · vulnerability
security
Mar 18, 2026
CVE-2026-33054

Mesop has a path traversal vulnerability (a technique where an attacker uses sequences like `../` to escape intended directory boundaries) in its file-based session backend that allows attackers to read, write, or delete arbitrary files on the server by crafting malicious `state_token` values in messages sent to the `/ui` endpoint. This can crash the application or give attackers unauthorized access to system files.
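A common shape for this kind of fix (a hypothetical sketch, not Mesop's patch) is to constrain the token's character set before it ever touches the filesystem, then double-check the resolved path:

```python
import os
import re

_TOKEN_RE = re.compile(r"^[A-Za-z0-9_-]{16,64}$")

def session_file(base_dir: str, state_token: str) -> str:
    """Map a session token to a file path, refusing traversal sequences."""
    if not _TOKEN_RE.fullmatch(state_token):
        raise ValueError("malformed state token")
    base = os.path.realpath(base_dir)
    path = os.path.realpath(os.path.join(base, state_token))
    # Belt-and-braces: the resolved file must sit directly under base_dir.
    if os.path.dirname(path) != base:
        raise ValueError("state token escapes session directory")
    return path
```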

ChatGPT did not cure a dog’s cancer

info · news
safety
Mar 18, 2026

A story claimed that ChatGPT helped cure an Australian entrepreneur's dog of cancer, generating widespread attention as evidence that AI could revolutionize medicine. The article argues the reality behind the claim is more complicated than the promoted narrative and differs from what was publicly reported.

Privacy-Preserving Spatio-Temporal Keyword Query with Verifiability for Location-Based Services

info · research · Peer-Reviewed
security

GHSA-22cc-p3c6-wpvm: h3 has a Server-Sent Events Injection via Unsanitized Newlines in Event Stream Fields

high · vulnerability
security
Mar 18, 2026
CVE-2026-33128

The h3 library has a vulnerability in its Server-Sent Events (SSE, a protocol for pushing real-time messages from a server to connected clients) implementation where newline characters in message fields are not removed before being sent. An attacker who controls any message field (id, event, data, or comment) can inject newline characters to break the SSE format and trick clients into receiving fake events, potentially forcing aggressive reconnections or manipulating which past events are replayed.
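The generic fix is to strip CR/LF from every field value at serialization time and, for payloads with intentional newlines, emit repeated `data:` lines as the SSE format intends. A minimal sketch (not h3's actual patch):

```python
from typing import Optional

def sse_field(name: str, value: str) -> str:
    """Serialize one SSE field with CR/LF stripped from the value."""
    cleaned = value.replace("\r", "").replace("\n", "")
    return f"{name}: {cleaned}\n"

def sse_event(event: str, data: str, event_id: Optional[str] = None) -> str:
    """Build one SSE event; injected newlines cannot create extra fields."""
    out = []
    if event_id is not None:
        out.append(sse_field("id", event_id))
    out.append(sse_field("event", event))
    # Intentional newlines in the payload become repeated `data:` fields.
    for chunk in data.split("\n"):
        out.append(sse_field("data", chunk))
    return "".join(out) + "\n"
```

With this in place, an attacker-controlled `id` of `"1\nevent: evil"` collapses onto a single `id:` line instead of forging a new field.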

GHSA-4663-4mpg-879v: SiYuan has Stored XSS to RCE via Unsanitized Bazaar README Rendering

medium · vulnerability
security
Mar 18, 2026
CVE-2026-33066

SiYuan's Bazaar (a community marketplace for plugins and themes) renders package README files without sanitizing HTML, allowing malicious package authors to embed JavaScript that runs when users view package details. Because SiYuan runs on Electron (a framework for building desktop apps) with `nodeIntegration: true` (allowing JavaScript to access system-level commands), this vulnerability escalates from XSS (cross-site scripting, where attackers inject malicious code into web pages) to full remote code execution (the ability to run any command on the user's computer).

'Claudy Day' Trio of Flaws Exposes Claude Users to Data Theft

high · news
security
Mar 18, 2026

Researchers discovered three connected flaws in Claude (an AI assistant) that can work together to steal user data, starting with a prompt injection attack (tricking the AI by hiding malicious instructions in its input) combined with a Google search vulnerability. This attack chain could potentially compromise enterprise networks that rely on Claude.

Shadow AI Risk: How SaaS Apps Are Quietly Enabling Massive Breaches

info · news
security · safety

GHSA-3xm7-qw7j-qc8v: SSRF in @aborruso/ckan-mcp-server via base_url allows access to internal networks

medium · vulnerability
security
Mar 18, 2026
CVE-2026-33060

The @aborruso/ckan-mcp-server tool allows attackers to make HTTP requests to any address by controlling the `base_url` parameter, which has no validation or filtering. An attacker can use prompt injection (tricking the AI by hiding instructions in its input) to make the tool scan internal networks or steal cloud credentials, but exploitation requires the victim's AI assistant to have this server connected.

GHSA-rf6x-r45m-xv3w: Langflow is Missing Ownership Verification in API Key Deletion (IDOR)

high · vulnerability
security
Mar 18, 2026
CVE-2026-33053

Langflow has a security flaw called IDOR (insecure direct object reference, where an attacker can access or modify resources belonging to other users) in its API key deletion feature. An authenticated attacker can delete other users' API keys by guessing their IDs, because the deletion endpoint doesn't verify that the API key belongs to the person making the request. This could allow attackers to disable other users' integrations or take over their accounts.

Mar 19, 2026

OpenAI has built a monitoring system for coding agents (AI systems that can autonomously write and execute code) used internally to detect misalignment, which occurs when an AI's behavior doesn't match its intended purpose. The system uses GPT-5.4 Thinking to review agent interactions within 30 minutes, flag suspicious actions, and alert teams so they can quickly respond to potential security issues.

Fix: OpenAI's explicit mitigation involves deploying a low-latency internal monitoring system powered by GPT-5.4 Thinking at maximum reasoning effort that reviews agent interactions and automatically alerts for actions inconsistent with user intent or violating internal security or compliance policies. The source states the monitor currently reviews interactions within 30 minutes of completion and that 'as the latency decreases towards near real-time review, the security benefits increase significantly,' with the eventual goal of evaluating coding agent actions before they are taken. The source also recommends that 'similar safeguards should be standard for internal coding agent deployments across the industry.'

OpenAI Blog
Mar 19, 2026

The Trump administration has banned AI company Anthropic from Pentagon systems as a "supply chain risk," requiring government contractors to remove the company's technology within 180 days. However, most organizations lack complete visibility into where and how AI systems are used across their networks, making it extremely difficult to identify and remove Anthropic technology when it may be embedded in applications, APIs (application programming interfaces, which allow software to communicate), developer tools, or third-party services.

CSO Online
Mar 19, 2026

AI agents (autonomous programs that perform tasks without constant human input) face security risks when deployed in business environments, as a compromised agent could expose customer data or execute unauthorized actions. CrowdStrike Falcon AIDR (AI Detection and Response, a security monitoring system) now supports NVIDIA NeMo Guardrails (an open-source library that adds safety constraints to AI systems) as of version 0.20.0, enabling developers to add security controls like blocking prompt injection attacks (tricking an AI by hiding instructions in its input), redacting sensitive data, and moderating restricted topics.

Fix: Organizations should use CrowdStrike Falcon AIDR with NVIDIA NeMo Guardrails to implement security controls. Specifically: start with monitoring mode to understand threats, then progressively enforce blocks and redactions as agents move from development to production. The solution includes over 75 built-in classification rules and support for custom data classification to block prompt injection attacks, redact sensitive data like account numbers and SSNs, detect hardcoded secrets, block code injection attempts, and moderate unwanted topics to ensure compliance.

CrowdStrike Blog
CSO Online
OpenAI Blog

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. The deadline for remediation is 2026-03-22.

CISA Known Exploited Vulnerabilities
Simon Willison's Weblog
NVD/CVE Database
Mar 18, 2026

AI agents, like the open-source Clawdbot, are becoming more powerful and autonomous but introduce serious security risks because attackers can compromise them through the open-source supply chain. Two major attack types threaten AI systems: model file attacks (where malicious code is hidden in AI model files uploaded to trusted repositories) and rug pull attacks (where attackers compromise MCP servers, which are tools that give AI agents capabilities, to perform malicious actions). The article notes that traditional security methods don't yet exist for protecting AI agents, and a single corrupted component can spread threats across many teams.

Fix: The source explicitly recommends: 'Teams must scan model files with tools that can parse machine learning formats, and load models in isolated containers, virtual machines or browser sandboxes.' For rug pull attacks specifically, the article states that 'the alternative is to use remote MCP servers whose code is maintained by trusted organizations' like GitHub, which 'reduces the risk of an MCP rug pull attack' (though it does not prevent malicious actions from the tools themselves).

Palo Alto Unit 42
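For pickle-based model files specifically, the "scan model files with tools that can parse machine learning formats" recommendation can be sketched as a minimal opcode scan that flags constructs able to execute code on load (illustrative only; production scanners, and safer formats such as safetensors, go much further):

```python
import pickletools

# Opcodes that can resolve importable callables or invoke them on load.
DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle_bytes(data: bytes):
    """Return (opcode, position) pairs for opcodes that can trigger code execution."""
    return [
        (opcode.name, pos)
        for opcode, arg, pos in pickletools.genops(data)
        if opcode.name in DANGEROUS_OPS
    ]
```

Note that this static scan never unpickles the data, so it is safe to run on untrusted files; anything it flags should only ever be loaded inside an isolated container or VM, as the article recommends.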
GitHub Advisory Database
GitHub Advisory Database
The Verge (AI)
Mar 18, 2026

This research paper presents a method for searching location-based services (apps that use your geographic position, like finding nearby restaurants) while protecting user privacy and ensuring the results are trustworthy. The approach combines spatio-temporal (location and time-based) keyword searching with verifiability (a way to prove the results are correct), allowing users to query location services without exposing their exact location or search patterns to the service provider.

Elsevier Security Journals
GitHub Advisory Database

Fix: Update to SiYuan version 3.5.10 or later. The vulnerability affects SiYuan <= 3.5.9.

GitHub Advisory Database
Dark Reading
Mar 18, 2026

Shadow AI refers to AI systems hidden within SaaS applications (software services accessed online) that operate without proper oversight, creating security risks that can lead to major data breaches. The article emphasizes that organizations lack visibility into these autonomous AI systems and calls for better monitoring and control mechanisms to manage agentic AI (AI that can independently take actions to achieve goals).

SecurityWeek

Fix: The source explicitly recommends: (1) Validate `base_url` against a configurable allowlist of permitted CKAN portals, (2) Block private IP ranges (RFC 1918, link-local addresses like 169.254.x.x), (3) Block cloud metadata endpoints (169.254.169.254), (4) Sanitize SQL input for datastore queries, and (5) Implement a SPARQL endpoint allowlist.

GitHub Advisory Database
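Recommendations (1)-(3) for the `base_url` SSRF can be sketched as follows (allowlist entries are illustrative; a complete fix should also validate the IPs that hostnames resolve to, since DNS can point a benign-looking name at an internal address):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"demo.ckan.org", "catalog.data.gov"}  # illustrative allowlist

def _is_blocked_ip(host: str) -> bool:
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal
    # Covers RFC 1918 ranges, loopback, and link-local (incl. 169.254.169.254).
    return addr.is_private or addr.is_loopback or addr.is_link_local

def validate_base_url(base_url: str) -> str:
    """Reject non-HTTP schemes, internal addresses, and non-allowlisted hosts."""
    parsed = urlparse(base_url)
    host = parsed.hostname
    if parsed.scheme not in ("http", "https") or not host:
        raise ValueError("invalid scheme or host")
    if _is_blocked_ip(host):
        raise ValueError("internal address blocked")
    if host not in ALLOWED_HOSTS:
        raise ValueError("host not in allowlist")
    return base_url
```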

Fix: Modify the delete_api_key endpoint and function by: (1) passing current_user to the delete function; (2) adding a verification check in delete_api_key() that confirms `api_key.user_id == current_user.id` before deletion; (3) returning a 403 Forbidden error if the user doesn't own the key. Example code provided: `if api_key.user_id != user_id: raise HTTPException(status_code=403, detail="Unauthorized")`

GitHub Advisory Database
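The ownership check above reduces to a plain comparison before the delete. A framework-agnostic sketch (Langflow's real endpoint works against its ORM session and a dependency-injected current user, which are elided here):

```python
def delete_api_key(api_keys: dict, key_id: str, current_user_id: str) -> None:
    """Delete an API key only if the requester owns it (anti-IDOR check)."""
    key = api_keys.get(key_id)
    if key is None:
        raise KeyError("api key not found")
    if key["user_id"] != current_user_id:
        # Surface as 403 Forbidden in the HTTP layer.
        raise PermissionError("not the owner of this API key")
    del api_keys[key_id]
```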