aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,650 · Last 24 hours: 1 · Last 7 days: 156
Daily Briefing: Sunday, March 29, 2026

Bluesky Launches AI-Powered Feed Customization Tool: Bluesky released Attie, an AI assistant that lets users create custom content feeds by describing what they want in plain language rather than adjusting technical settings. The tool runs on Claude (Anthropic's language model) and will integrate into apps built on Bluesky's AT Protocol.

Latest Intel

01

Multi-modal malware classification with hierarchical consistency and saliency-constrained adversarial training

research · security
Mar 16, 2026

This paper discusses the growing challenge of malware (malicious software designed to exploit computer system vulnerabilities) detection, noting that over 450,000 new malware samples are detected daily as of 2024. Traditional detection methods like signature-based detection (matching known byte patterns against a database) and behavior-based detection (running malware in isolated test environments to observe its actions) have limitations: signature-based methods fail against new or disguised malware, while behavior-based methods are computationally expensive and can be evaded by malware that detects virtual environments. The paper proposes using machine learning and deep learning approaches trained on features from both static and dynamic analysis to better classify files as malicious or benign.

Elsevier Security Journals
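The static-feature idea above can be sketched minimally in stdlib Python: a byte-value histogram as the feature and a nearest-centroid rule as the classifier. The paper's actual approach uses multi-modal deep models over static and dynamic features; everything here, including the synthetic "samples", is an illustrative assumption.

```python
# Toy sketch: classify files as malicious/benign from one static feature
# (a normalized byte-value histogram) with a nearest-centroid rule.
# All data below is synthetic; real pipelines use far richer features.
from collections import Counter

def byte_histogram(data: bytes) -> list[float]:
    """Normalized 256-bin histogram of byte values (a simple static feature)."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def centroid(vectors: list[list[float]]) -> list[float]:
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(256)]

def classify(sample: bytes, benign_c: list[float], malicious_c: list[float]) -> str:
    h = byte_histogram(sample)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(h, c))
    return "malicious" if dist(malicious_c) < dist(benign_c) else "benign"

# Synthetic training data: "benign" files are ASCII-heavy text; "malicious"
# files are high-entropy (packed/encrypted payloads often look near-uniform).
benign = [b"hello world, this is plain text " * 8, b"import os; print('hi') " * 10]
malicious = [bytes(range(256)) * 4, bytes((i * 37) % 256 for i in range(1024))]

bc = centroid([byte_histogram(x) for x in benign])
mc = centroid([byte_histogram(x) for x in malicious])
print(classify(bytes((i * 91) % 256 for i in range(512)), bc, mc))  # prints "malicious"
```

A signature-based engine would need an exact byte pattern for this input; the statistical feature flags it because its byte distribution looks like the packed/encrypted training samples.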
02

Personalized differential privacy for high-dimensional data: A random sampling and pruning privacy tree approach

security · privacy
Mar 16, 2026

This paper discusses differential privacy (DP, a mathematical method that adds noise to data to protect individual privacy while keeping data useful), which is stronger than traditional anonymization techniques like generalization and suppression. The authors address a key challenge: existing DP methods struggle with high-dimensional data (datasets with many features) and treat all data features equally, even though real-world data has varying privacy needs, such as medical records where disease diagnoses need more protection than age.

Elsevier Security Journals
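The per-feature budget idea can be sketched with a plain Laplace mechanism, where each column gets its own epsilon, rather than the paper's random-sampling privacy-tree algorithm. The feature names, sensitivities, and epsilon values below are hypothetical.

```python
# Minimal sketch of "personalized" differential privacy: each feature gets its
# own privacy budget (epsilon), so a sensitive column (diagnosis) receives more
# noise than a less sensitive one (age). Noise scale = sensitivity / epsilon.
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def privatize(record: dict[str, float], sensitivity: dict[str, float],
              epsilon: dict[str, float], rng: random.Random) -> dict[str, float]:
    """Add per-feature Laplace noise; smaller epsilon => stronger privacy => more noise."""
    return {k: v + laplace_noise(sensitivity[k] / epsilon[k], rng)
            for k, v in record.items()}

rng = random.Random(0)
record = {"age": 42.0, "diagnosis_code": 250.0}
sensitivity = {"age": 1.0, "diagnosis_code": 1.0}
# Hypothetical budgets: diagnosis is more sensitive, so it gets a smaller epsilon.
epsilon = {"age": 1.0, "diagnosis_code": 0.1}
noisy = privatize(record, sensitivity, epsilon, rng)
print(noisy)
```

With these budgets the diagnosis column's noise scale is 10x the age column's, which is exactly the "varying privacy needs" the paper targets; uniform-epsilon DP would either over-noise age or under-protect diagnosis.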
03

LlamaIndex v0.14.18

security
Mar 16, 2026

LlamaIndex v0.14.18 is a release that deprecates Python 3.9 (stops supporting an older version of the Python programming language) across multiple packages and includes several bug fixes, such as preserving chat history during incomplete data streaming and preventing division-by-zero errors. The update also adds features like improved text filtering across different database backends and maintains dependencies across 51 directories.

LlamaIndex Security Releases
04

CVE-2026-4269 - Improper S3 ownership verification in Bedrock AgentCore Starter Toolkit

security
Mar 16, 2026

The Bedrock AgentCore Starter Toolkit (a tool for building AI agents on AWS) before version v0.1.13 has a vulnerability where it doesn't properly verify S3 ownership (S3 is AWS's cloud storage service). This missing check could allow an attacker to inject malicious code during the build process (when the software is being compiled), potentially leading to code execution in the running application. The vulnerability only affects users who built the toolkit after September 24, 2025.

Fix: Update to Bedrock AgentCore Starter Toolkit version v0.1.13 or later.

AWS Security Bulletins
05

Where OpenAI’s technology could show up in Iran

policy · security
Mar 16, 2026

OpenAI has agreed to allow the Pentagon to use its AI technology in classified military environments, raising questions about potential applications in the escalating conflict with Iran. The article describes how OpenAI's generative AI (AI that can produce text, images, or other outputs based on patterns) could be used to help analyze potential military targets and prioritize strikes and, through a partnership with Anduril, to defend against drone attacks, marking the first serious military testing of generative AI for real-time combat decisions.

MIT Technology Review
06

Encyclopedia Britannica is suing OpenAI for allegedly ‘memorizing’ its content with ChatGPT

security · policy
Mar 16, 2026

Encyclopedia Britannica and Merriam-Webster sued OpenAI, claiming it used their copyrighted content to train ChatGPT without permission and that GPT-4 (OpenAI's AI model) now outputs text that closely matches their original material. The publishers allege that OpenAI 'memorized' their content during training, meaning the AI absorbed and can reproduce substantial portions of their work.

The Verge (AI)
07

CVE-2026-4270 - AWS API MCP File Access Restriction Bypass

security
Mar 16, 2026

A vulnerability (CVE-2026-4270) exists in AWS API MCP Server versions 0.2.14 through 1.3.8, which is software that lets AI assistants interact with AWS services. The bug allows attackers to bypass file access restrictions (the security controls that limit which files an AI can read) and potentially read any file on the system, even when those restrictions are supposed to be enabled.

AWS Security Bulletins
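A generic sketch of the control such a bypass defeats: resolve the real path (following `..` and symlinks) before checking containment, instead of comparing string prefixes. This assumes nothing about the AWS API MCP Server's actual implementation; paths and the allowed root are illustrative.

```python
# File-access restriction sketch: only serve files whose *resolved* path sits
# inside an allowed root. Naive string-prefix checks can be bypassed with
# "../" sequences, absolute paths, or symlinks; resolving first catches those.
from pathlib import Path

def is_allowed(requested: str, allowed_root: str) -> bool:
    root = Path(allowed_root).resolve()
    # Path(root, absolute_path) yields the absolute path, so absolute-path
    # tricks also end up outside `root` and are rejected.
    target = Path(allowed_root, requested).resolve()
    # is_relative_to (Python 3.9+) compares resolved components.
    return target.is_relative_to(root)

print(is_allowed("notes/readme.txt", "/srv/data"))   # True
print(is_allowed("../../etc/passwd", "/srv/data"))   # False
```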
08

GHSA-hqmj-h5c6-369m: ONNX Untrusted Model Repository Warnings Suppressed by silent=True in onnx.hub.load() — Silent Supply-Chain Attack

security
Mar 16, 2026

ONNX's onnx.hub.load() function has a security flaw where the silent=True parameter completely disables warnings and user confirmations when loading models from untrusted repositories (sources not officially verified). This means an attacker could trick an application into silently downloading and running malicious models from their own GitHub repository without the user knowing, potentially allowing theft of sensitive files like SSH keys or cloud credentials.

GitHub Advisory Database
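One defensive pattern for the behavior described is to refuse to suppress confirmations unless the repository is explicitly allowlisted. The sketch below injects the loader function so it runs without `onnx` installed; the allowlist contents and repo names are hypothetical, and this is a caller-side guard, not ONNX's fix.

```python
# Guard sketch: never pass silent=True for a repository that is not on an
# explicit trust allowlist. `load_fn` stands in for onnx.hub.load here.
TRUSTED_REPOS = {"onnx/models:main"}  # hypothetical allowlist

def safe_hub_load(model: str, repo: str, load_fn):
    if repo not in TRUSTED_REPOS:
        # Refuse to silence warnings/confirmations for an unvetted source.
        raise PermissionError(f"refusing silent load from untrusted repo: {repo}")
    # Only a vetted repo may be loaded with confirmations suppressed.
    return load_fn(model, repo=repo, silent=True)

fake_load = lambda model, repo, silent: f"loaded {model} from {repo}"  # stand-in
print(safe_hub_load("mnist", "onnx/models:main", fake_load))
try:
    safe_hub_load("mnist", "attacker/models:main", fake_load)
except PermissionError as e:
    print("blocked:", e)
```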
09

CVE-2026-26133: AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.

security
Mar 16, 2026

CVE-2026-26133 is a vulnerability in Microsoft 365 Copilot where an attacker can use AI command injection (tricking the AI system by embedding hidden commands in normal-looking input) to access and disclose information over a network without authorization. The vulnerability has a CVSS score (a 0-10 rating of how severe a security flaw is) of 4.0, indicating moderate severity.

NVD/CVE Database
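The qualitative labels behind phrases like "moderate severity" come from the CVSS v3.1 rating scale, which maps score bands to names and is easy to encode:

```python
# CVSS v3.1 qualitative severity bands:
# 0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(4.0))  # the M365 Copilot flaw above: "Medium"
```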
10

CVE-2026-25083: GROWI OpenAI thread/message API endpoints do not perform authorization. Affected are v7.4.5 and earlier versions.

security
Mar 16, 2026

CVE-2026-25083 is a missing authorization vulnerability in GROWI (a collaboration platform) affecting version 7.4.5 and earlier. A logged-in user who knows the identifier of a shared AI assistant can view and modify other users' conversation threads and messages without permission, because the API endpoints don't properly verify whether the user should have access. This is rated as HIGH severity with a CVSS score (a 0-10 scale measuring vulnerability severity) of 8.7.

NVD/CVE Database
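The missing check boils down to verifying the requester against the thread's owner and share list, not merely that the thread identifier exists. A minimal sketch with a hypothetical data model (not GROWI's actual schema):

```python
# Authorization sketch: knowing a thread ID must not grant access; the user
# must own the thread or appear on its share list. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Thread:
    thread_id: str
    owner: str
    shared_with: set[str] = field(default_factory=set)

def can_access(user: str, thread: Thread) -> bool:
    return user == thread.owner or user in thread.shared_with

def get_thread(user: str, threads: dict[str, Thread], thread_id: str) -> Thread:
    thread = threads[thread_id]       # lookup by identifier alone...
    if not can_access(user, thread):  # ...then the authorization check GROWI lacked
        raise PermissionError("user not authorized for this thread")
    return thread

threads = {"t1": Thread("t1", owner="alice", shared_with={"bob"})}
print(get_thread("bob", threads, "t1").thread_id)  # prints "t1": bob is shared on it
try:
    get_thread("mallory", threads, "t1")           # knows the ID, but not authorized
except PermissionError:
    print("blocked")
```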
Critical This Week (5 issues)

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… · NVD/CVE Database · Mar 27, 2026

critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm · CSO Online · Mar 27, 2026

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical · CISA: New Langflow flaw actively exploited to hijack AI workflows · BleepingComputer · Mar 26, 2026

critical · GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE (CVE-2026-33696) · GitHub Advisory Database · Mar 26, 2026