aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 2,829 · Last 24 hours: 3 · Last 7 days: 160
Daily Briefing: Monday, April 6, 2026

Attackers Exploit AI Systems as Infrastructure for Attacks: Adversaries are increasingly abusing legitimate AI services for malicious operations, including poisoning MCP servers (tools that connect AI assistants to external services) in supply chains, using AI platforms like Claude and Copilot as command-and-control channels (hidden pathways for sending instructions to compromised systems), and hijacking AI agents (automated systems that perform tasks) to exfiltrate data or execute destructive actions. This represents an evolution beyond prompt injection (tricking an AI by hiding instructions in its input) toward sophisticated agent hijacking techniques.


AI Security Tools Create New Vendor Lock-In Risks: Commercial AI-powered security products are generating a distinct form of platform dependency through proprietary training data, vendor-specific threat intelligence feeds (collections of indicators showing cyber attacks), and specialized hardware requirements. Organizations face significant migration costs and technical barriers when attempting to switch providers.

Latest Intel

Archive page 280 of 283
01

MLOps - Operationalizing the machine learning model

research
Sep 5, 2020

Operationalizing an ML model (putting it into production so it can be used by real applications) involves deploying the trained model to a web server so it can make predictions. The author found that integrating TensorFlow (a popular ML framework) with Golang was unexpectedly complicated, so they chose Python instead for their web server.
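The deployment step described above can be sketched as a tiny prediction endpoint using only the Python standard library. The scoring function below is a stand-in for a real trained TensorFlow model, and all names here are illustrative assumptions, not taken from the original post:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(pixels):
    # Stand-in for model.predict(): scores an image by mean pixel
    # intensity. A real deployment would load the trained model here.
    score = sum(pixels) / (255.0 * len(pixels))
    return {"husky": round(score, 3)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the model, return the score as JSON.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        pixels = json.loads(body)["pixels"]
        payload = json.dumps(predict(pixels)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve:
# HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

A client would then POST a JSON body like `{"pixels": [...]}` to the endpoint and read the score from the response.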

Embrace The Red

Critical This Week: 5 issues

critical
GHSA-jjhc-v7c2-5hh6: LiteLLM: Authentication bypass via OIDC userinfo cache key collision
CVE-2026-35030 · GitHub Advisory Database · Apr 3, 2026
02

Husky AI: Building a machine learning system

research
Sep 4, 2020

This post describes how the author built Husky AI, a machine learning system that classifies images as huskies or non-huskies, using a convolutional neural network (CNN, a type of AI model designed to process images). The author gathered about 1,300 husky images and 3,000 other images using Bing Image Search, then organized them into separate training and validation folders to build and test the model. The post notes a potential security risk: attackers could poison either the training or validation image sets to cause the model to perform poorly.

Embrace The Red
03

The machine learning pipeline and attacks

research · security
Sep 2, 2020

This post introduces the machine learning pipeline, which consists of sequential steps from collecting training images, pre-processing data, defining and training a model, evaluating performance, and finally deploying it to production as an API (application programming interface, a way for software to communicate). The author uses a "Husky AI" example application that identifies whether uploaded images contain huskies, and explains that understanding this pipeline's components is important for identifying potential security attacks on machine learning systems.

Embrace The Red
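The pipeline stages described above can be sketched as plain functions. The function names and the toy "majority label" model are illustrative, not from the post, but they show how each stage feeds the next, and why each stage is a separate attack surface:

```python
def collect(sources):
    # Gather raw images from multiple sources (e.g. image search results).
    # Attack surface: poisoned images can enter the dataset here.
    return [img for src in sources for img in src]

def preprocess(images):
    # Normalize pixel values to [0, 1].
    return [[p / 255.0 for p in img] for img in images]

def train(images, labels):
    # Toy "model": always predicts the most common training label.
    majority = max(set(labels), key=labels.count)
    return lambda img: majority

def evaluate(model, images, labels):
    # Fraction of validation images classified correctly.
    # Attack surface: a poisoned validation set skews this score.
    correct = sum(model(img) == lbl for img, lbl in zip(images, labels))
    return correct / len(labels)

# The final stage, deploy(model), would expose the model as an API.
```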
04

Getting the hang of machine learning

security · research
Sep 1, 2020

A security researcher describes their year-long study of machine learning and AI fundamentals, with the goal of understanding how to build and then attack ML systems. The post outlines their learning approach, courses, and materials for others interested in starting adversarial machine learning (attacking ML systems).

Embrace The Red
05

Race conditions when applying ACLs

security
Aug 24, 2020

Race conditions in ACL (access control list, the rules that determine who can access files) application occur when a system creates a sensitive file but there is a time gap before permissions are applied to protect it, potentially allowing attackers to access the file during that window. This type of vulnerability exploits the timing between file creation and permission lockdown to expose sensitive information.

Embrace The Red
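The window the post describes, and the standard fix of applying a restrictive mode atomically at creation time, can be illustrated with a minimal sketch (assuming a POSIX filesystem; the file name and contents are illustrative):

```python
import os
import stat
import tempfile

def create_secret_racy(path, data):
    # VULNERABLE: the file briefly exists with default (umask-derived)
    # permissions before chmod runs; an attacker who opens it in that
    # window keeps access to the contents.
    with open(path, "w") as f:
        f.write(data)
    os.chmod(path, 0o600)

def create_secret_safe(path, data):
    # SAFE: the 0o600 mode is applied atomically at creation time, so
    # there is no window with looser permissions. O_EXCL additionally
    # fails if the path already exists (e.g. an attacker's symlink).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)

path = os.path.join(tempfile.mkdtemp(), "api_key.txt")
create_secret_safe(path, "hunter2")
```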
06

Red Teaming Telemetry Systems

security · safety
Aug 12, 2020

Telemetry (data collected about how users interact with software) is often used by companies to make business decisions, but telemetry pipelines (the systems that collect and process this data) can be vulnerable to attacks. A red team security test demonstrated this by spoofing telemetry requests to falsely show a Commodore 64 as the most popular operating system, which could mislead companies into making poor decisions based on fake usage data.

Fix: The post recommends that internal red teams run security assessments of telemetry pipelines, ensuring that 'pipelines are assessed and proper sanitization, sanity checks, input validation for telemetry data is in place.' No specific patch version or concrete implementation details are provided.

Embrace The Red
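The "sanitization, sanity checks, input validation" the post calls for might look like the following allowlist sketch; the field names and limits are illustrative assumptions, not from the original:

```python
# Telemetry events that fail basic plausibility checks are dropped
# before they reach the analytics pipeline.
ALLOWED_OS = {"windows", "macos", "linux", "android", "ios"}

def validate_event(event):
    # Reject operating systems outside the supported set (this is the
    # kind of check that would have caught the spoofed Commodore 64 data).
    os_name = event.get("os")
    if not isinstance(os_name, str) or os_name.lower() not in ALLOWED_OS:
        return False
    # Reject implausible numeric values: uptime must be a number
    # between zero and one year.
    uptime = event.get("uptime_seconds")
    if not isinstance(uptime, (int, float)) or not 0 <= uptime <= 31_536_000:
        return False
    return True
```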
07

Illusion of Control: Capability Maturity Models and Red Teaming

security
Jul 31, 2020

This article discusses how to measure the maturity and effectiveness of security testing programs, particularly red teaming (simulated attacks to find vulnerabilities). The author suggests using existing frameworks like CMMI (Capability Maturity Model Integration, a system developed by Carnegie Mellon University that rates how well-organized software processes are on a scale of one to five) that can be adapted to evaluate offensive security programs.

Embrace The Red
08

Motivated Intruder - Red Teaming for Privacy!

security · privacy
Jul 24, 2020

This article discusses red teaming techniques (testing methods where security professionals act as attackers to find weaknesses) that organizations can use to identify privacy issues in their systems and infrastructure. The author emphasizes that privacy violations often come from insider threats (employees or contractors with authorized access to sensitive data), and highlights the importance of regular privacy testing as required by regulations like GDPR (General Data Protection Regulation, which sets rules for protecting personal data in Europe). The article mentions the "Motivated Intruder" threat model, where an insider with access to anonymized datasets (data with identifying information supposedly removed) uses data science techniques to reidentify people and expose their identities.

Embrace The Red
09

CVE-2020-14621: Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: JAXP). Supported versions that are…

security
Jul 15, 2020

A vulnerability in Oracle Java SE's JAXP component (a tool for processing XML data) allows attackers to modify or delete data without authentication by sending malicious data through network protocols. The flaw affects multiple Java versions including 7u261, 8u251, 11.0.7, and 14.0.1, and has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 5.3.

NVD/CVE Database
10

Blast from the past: Cross Site Scripting on the AWS Console

security
Jul 1, 2020

A researcher discovered a persistent XSS (cross-site scripting, where an attacker injects malicious code into a web page that runs in other users' browsers) vulnerability in the AWS Console several years ago. The post documents how they found the bug, the techniques they used, and Amazon's response to the discovery.

Embrace The Red
critical
CVE-2026-0545: In mlflow/mlflow, the FastAPI job endpoints under `/ajax-api/3.0/jobs/*` are not protected by authentication or authoriz…
CVE-2026-0545 · NVD/CVE Database · Apr 3, 2026

critical
GHSA-3hfp-gqgh-xc5g: Axios supply chain attack - dependency in @lightdash/cli may resolve to compromised axios versions
GitHub Advisory Database · Apr 2, 2026

critical
GHSA-6vh2-h83c-9294: PraisonAI: Python Sandbox Escape via str Subclass startswith() Override in execute_code
CVE-2026-34938 · GitHub Advisory Database · Apr 1, 2026

critical
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/…
CVE-2026-34162 · NVD/CVE Database · Mar 31, 2026