aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,020
Last 24 hours: 3
Last 7 days: 183
Daily Briefing: Saturday, April 11, 2026

Anthropic's Claude Code Dominates Enterprise AI Conversation: At a major industry conference, Anthropic's coding agent (a tool that autonomously generates, edits, and reviews code) has eclipsed OpenAI as the focus among executives and investors, generating over $2.5 billion in annualized revenue since its May 2025 launch. The company's narrow focus on coding capabilities rather than product sprawl has accelerated enterprise adoption despite ongoing legal tensions with the Department of Defense.


Spotify Confronts Large-Scale AI Impersonation Campaign: AI-generated music is being uploaded to Spotify under the names of legitimate artists, including prominent musicians like Jason Moran and Drake, prompting the platform to remove over 75 million spammy tracks in the past year. Spotify is developing a pre-publication review tool that will allow artists to approve releases before they appear on the platform, addressing what amounts to identity fraud at scale.

Latest Intel

01

Threat modeling a machine learning system

security, research
Sep 6, 2020

Critical This Week (5 issues)

critical
GHSA-8x8f-54wf-vv92: PraisonAI Browser Server allows unauthenticated WebSocket clients to hijack connected extension sessions
GitHub Advisory Database, Apr 10, 2026

This post explains threat modeling for machine learning systems, which is a process to systematically identify potential security attacks. The author uses Microsoft's Threat Modeling tool and STRIDE (a framework categorizing threats into spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to identify vulnerabilities in a machine learning system called 'Husky AI', and notes that perturbation attacks (where attackers query the model to trick it into making wrong predictions) are a particular concern for ML systems.

Embrace The Red
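The STRIDE exercise described above amounts to crossing each pipeline component with each threat category and triaging the results. A minimal sketch of that enumeration, with illustrative component names and example ML threats that are assumptions of this sketch rather than details from the original post:

```python
# Illustrative sketch: crossing STRIDE threat categories with the components
# of a hypothetical ML pipeline. Component names and example threats are
# assumptions for this sketch, not taken from the Husky AI write-up.
STRIDE = {
    "Spoofing": "An attacker impersonates the model API or a data source.",
    "Tampering": "Training data or model weights are modified (poisoning).",
    "Repudiation": "Predictions are served without an audit trail.",
    "Information disclosure": "Model inversion leaks training data.",
    "Denial of service": "Expensive queries exhaust inference capacity.",
    "Elevation of privilege": "A model endpoint is abused to reach internal systems.",
}

COMPONENTS = ["training data", "model artifact", "inference API"]

def enumerate_threats(components, categories):
    """Cross every pipeline component with every STRIDE category."""
    return [(comp, cat) for comp in components for cat in categories]

threats = enumerate_threats(COMPONENTS, STRIDE)
# 3 components x 6 categories = 18 candidate threats to triage
```

The value of the exercise is in the triage that follows: most of the 18 pairs will be low risk, but pairs like ("training data", "Tampering") map directly to the poisoning and perturbation attacks the post highlights.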
02

MLOps - Operationalizing the machine learning model

research
Sep 5, 2020

Operationalizing an ML model (putting it into production so it can be used by real applications) involves deploying the trained model to a web server so it can make predictions. The author found that integrating TensorFlow (a popular ML framework) with Golang was unexpectedly complicated, so they chose Python instead for their web server.

Embrace The Red
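Operationalizing a model in the way the post describes boils down to wrapping inference in an HTTP endpoint. A minimal standard-library sketch of that shape, with the model stubbed out (a real deployment would call into trained TensorFlow/Keras weights; the field names here are assumptions):

```python
# Minimal sketch of serving a model behind a Python HTTP endpoint, in the
# spirit of the post. The predict() body is a stub; a real deployment would
# replace it with model.predict(...) on loaded weights.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(payload: dict) -> dict:
    """Stub for model inference; the scoring rule here is illustrative."""
    features = payload.get("features", [])
    score = sum(features) / len(features) if features else 0.0
    return {"husky_probability": score}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080):
    """Blocks forever; run manually to expose the endpoint."""
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

Keeping the server in Python, as the author ultimately did, avoids the cross-language binding friction they hit with TensorFlow and Golang.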
03

Husky AI: Building a machine learning system

research
Sep 4, 2020

This post describes how the author built Husky AI, a machine learning system that classifies images as huskies or non-huskies, using a convolutional neural network (CNN, a type of AI model designed to process images). The author gathered about 1,300 husky images and 3,000 other images using Bing Image Search, then organized them into separate training and validation folders to build and test the model. The post notes a potential security risk: attackers could poison either the training or validation image sets to cause the model to perform poorly.

Embrace The Red
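The train/validation folder split described above can be done with the standard library alone. A sketch under assumed directory names (the originals are not given in the summary); keeping the two sets in separate folders also gives a defender a fixed surface to audit for poisoned images:

```python
# Sketch of splitting a folder of images into train/validation subfolders,
# standard library only. Directory layout is an assumption for illustration.
import random
import shutil
from pathlib import Path

def split_dataset(source: Path, dest: Path,
                  val_fraction: float = 0.2, seed: int = 0):
    """Copy images from source into dest/train and dest/validation."""
    images = sorted(source.glob("*.jpg"))
    random.Random(seed).shuffle(images)       # deterministic shuffle
    n_val = int(len(images) * val_fraction)
    for subset, files in (("validation", images[:n_val]),
                          ("train", images[n_val:])):
        subdir = dest / subset
        subdir.mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy2(img, subdir / img.name)
    return len(images) - n_val, n_val
```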
04

The machine learning pipeline and attacks

research, security
Sep 2, 2020

This post introduces the machine learning pipeline, which consists of sequential steps from collecting training images, pre-processing data, defining and training a model, evaluating performance, and finally deploying it to production as an API (application programming interface, a way for software to communicate). The author uses a "Husky AI" example application that identifies whether uploaded images contain huskies, and explains that understanding this pipeline's components is important for identifying potential security attacks on machine learning systems.

Embrace The Red
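The sequential pipeline the post walks through can be sketched as a chain of stage functions threading shared state; each stage is also a distinct attack surface. Stage names and state fields below are illustrative, not taken from the Husky AI code:

```python
# Toy sketch of the ML pipeline as sequential stages. Every stage is a
# placeholder; the point is the shape, and that each hand-off between
# stages is somewhere an attacker could tamper with data or artifacts.
def collect(state):    state["images"] = 1300;           return state
def preprocess(state): state["preprocessed"] = True;     return state
def train(state):      state["model"] = "cnn-v1";        return state
def evaluate(state):   state["accuracy"] = 0.9;          return state
def deploy(state):     state["endpoint"] = "/api/predict"; return state

PIPELINE = [collect, preprocess, train, evaluate, deploy]

def run_pipeline(stages):
    state = {}
    for stage in stages:
        state = stage(state)   # each transition is an attack surface
    return state
```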
05

Getting the hang of machine learning

security, research
Sep 1, 2020

A security researcher describes their year-long study of machine learning and AI fundamentals, with the goal of understanding how to build and then attack ML systems. The post outlines their learning approach, courses, and materials for others interested in starting adversarial machine learning (attacking ML systems).

Embrace The Red
06

Race conditions when applying ACLs

security
Aug 24, 2020

Race conditions in ACL (access control list, the rules that determine who can access files) application occur when a system creates a sensitive file but there is a time gap before permissions are applied to protect it, potentially allowing attackers to access the file during that window. This type of vulnerability exploits the timing between file creation and permission lockdown to expose sensitive information.

Embrace The Red
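The race described above exists when permissions are applied after the file is created. The standard mitigation is to make the permissions atomic with creation, so no window exists; a minimal Python sketch:

```python
# Avoiding the create-then-chmod race: pass the restrictive mode to
# os.open() so the file is never observable with looser permissions.
import os

def create_private_file(path: str, data: bytes) -> None:
    # O_EXCL fails if the file already exists (blocks pre-planted files);
    # mode 0o600 is applied at creation time, so there is no window in
    # which another user can open the file.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

The contrast is with the vulnerable pattern `open(path, "w")` followed by a later `os.chmod(path, 0o600)`, where an attacker can read the file in between.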
07

Red Teaming Telemetry Systems

security, safety
Aug 12, 2020

Telemetry (data collected about how users interact with software) is often used by companies to make business decisions, but telemetry pipelines (the systems that collect and process this data) can be vulnerable to attacks. A red team security test demonstrated this by spoofing telemetry requests to falsely show a Commodore 64 as the most popular operating system, which could mislead companies into making poor decisions based on fake usage data.

Fix: The source mentions that internal red teams should run security assessments of telemetry pipelines. According to the text, this ensures that 'pipelines are assessed and proper sanitization, sanity checks, input validation for telemetry data is in place.' However, no specific technical fix, patch version, or concrete implementation details are provided.

Embrace The Red
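The sanity checks and input validation the post calls for can be as simple as an allowlist plus type checks on each inbound event. A sketch under assumed field names (the post does not specify a schema):

```python
# Sketch of server-side sanity checks for a telemetry pipeline. Field
# names and the allowlist are assumptions for illustration.
ALLOWED_OS = {"windows", "macos", "linux", "android", "ios"}

def validate_event(event: dict) -> bool:
    """Reject telemetry events that fail basic sanity checks."""
    os_name = str(event.get("os", "")).lower()
    if os_name not in ALLOWED_OS:            # drops a spoofed 'Commodore 64'
        return False
    if not isinstance(event.get("app_version"), str):
        return False
    return True
```

Validation like this would not stop a determined spoofer who sends plausible values, but it removes the cheap, high-volume fakes that skewed the red team's demonstration.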
08

Illusion of Control: Capability Maturity Models and Red Teaming

security
Jul 31, 2020

This article discusses how to measure the maturity and effectiveness of security testing programs, particularly red teaming (simulated attacks to find vulnerabilities). The author suggests using existing frameworks like CMMI (Capability Maturity Model Integration, a system developed by Carnegie Mellon University that rates how well-organized software processes are on a scale of one to five) that can be adapted to evaluate offensive security programs.

Embrace The Red
09

Motivated Intruder - Red Teaming for Privacy!

security, privacy
Jul 24, 2020

This article discusses red teaming techniques (testing methods where security professionals act as attackers to find weaknesses) that organizations can use to identify privacy issues in their systems and infrastructure. The author emphasizes that privacy violations often come from insider threats (employees or contractors with authorized access to sensitive data), and highlights the importance of regular privacy testing as required by regulations like GDPR (General Data Protection Regulation, which sets rules for protecting personal data in Europe). The article mentions the "Motivated Intruder" threat model, where an insider with access to anonymized datasets (data with identifying information supposedly removed) uses data science techniques to reidentify people and expose their identities.

Embrace The Red
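The "Motivated Intruder" reidentification risk is, at its core, a linkage attack: joining the anonymized dataset against auxiliary data on quasi-identifiers that survive anonymization. A toy sketch with fabricated records:

```python
# Toy illustration of a linkage (reidentification) attack: rows in an
# 'anonymized' dataset are matched to auxiliary public records on
# quasi-identifiers. All records here are fabricated for the example.
anonymized = [
    {"zip": "98101", "birth_year": 1985, "diagnosis": "flu"},
    {"zip": "98102", "birth_year": 1990, "diagnosis": "asthma"},
]
public = [
    {"name": "A. Example", "zip": "98101", "birth_year": 1985},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year")):
    """Link rows that agree on every quasi-identifier key."""
    matches = []
    for a in anon_rows:
        for b in aux_rows:
            if all(a[k] == b[k] for k in keys):
                matches.append({**b, **a})
    return matches
```

A single exact match on the quasi-identifiers is enough to attach a name to a sensitive attribute, which is exactly the scenario the threat model asks red teams to attempt before an attacker does.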
10

CVE-2020-14621: Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: JAXP). Supported versions that are

security
Jul 15, 2020

A vulnerability in Oracle Java SE's JAXP component (a tool for processing XML data) allows attackers to modify or delete data without authentication by sending malicious data through network protocols. The flaw affects multiple Java versions including 7u261, 8u251, 11.0.7, and 14.0.1, and has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 5.3.

NVD/CVE Database
critical
CVE-2026-40111: PraisonAIAgents is a multi-agent teams system. Prior to 1.5.128, the memory hooks executor in praisonaiagents passes a us
CVE-2026-40111, NVD/CVE Database, Apr 9, 2026

critical
GHSA-2763-cj5r-c79m: PraisonAI Vulnerable to OS Command Injection
GitHub Advisory Database, Apr 8, 2026

critical
GHSA-qf73-2hrx-xprp: PraisonAI has sandbox escape via exception frame traversal in `execute_code` (subprocess mode)
CVE-2026-39888, GitHub Advisory Database, Apr 8, 2026

critical
Hackers exploit a critical Flowise flaw affecting thousands of AI workflows
CSO Online, Apr 8, 2026