aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,020 · Last 24 hours: 2 · Last 7 days: 183
Daily Briefing: Saturday, April 11, 2026

Anthropic's Claude Code Dominates Enterprise AI Conversation: At a major industry conference, Anthropic's coding agent (a tool that autonomously generates, edits, and reviews code) has eclipsed OpenAI as the focus among executives and investors, generating over $2.5 billion in annualized revenue since its May 2025 launch. The company's narrow focus on coding capabilities rather than product sprawl has accelerated enterprise adoption despite ongoing legal tensions with the Department of Defense.


Spotify Confronts Large-Scale AI Impersonation Campaign: AI-generated music is being uploaded to Spotify under the names of legitimate artists, including prominent musicians like Jason Moran and Drake, prompting the platform to remove over 75 million spammy tracks in the past year. Spotify is developing a pre-publication review tool that will allow artists to approve releases before they appear on the platform, addressing what amounts to identity fraud at scale.

Latest Intel

01

Machine Learning Attack Series: Repudiation Threat and Auditing

security, research
Nov 10, 2020

Critical This Week (5 issues): GHSA-8x8f-54wf-vv92: PraisonAI Browser Server allows unauthenticated WebSocket clients to hijack connected extension sessions (critical, GitHub Advisory Database, Apr 10, 2026)

Repudiation is a security threat where someone denies performing an action, such as replacing an AI model file with a malicious version. The source explains how to use auditd (a Linux auditing tool) and centralized monitoring systems like Splunk or Elastic Stack to create audit logs that track who accessed or modified files and when, helping prove or investigate whether specific accounts made changes.

Fix: To mitigate repudiation threats, the source recommends: (1) installing and configuring auditd on Linux using 'sudo apt install auditd', (2) adding file monitoring rules with auditctl (example: 'sudo auditctl -w /path/to/file -p rwa -k keyword' to audit read, write, and append operations), and (3) pushing audit logs to centralized monitoring systems such as Splunk, Elastic Stack, or Azure Sentinel for analysis and visualization.

Embrace The Red
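Audit records only refute a repudiation claim if they can be tied back to an actor and a time. A minimal sketch of reading one back, assuming a simplified auditd-style record (real `ausearch` output carries many more fields, and `parse_audit_record` is a hypothetical helper, not part of any auditd tooling):

```python
import re
from datetime import datetime, timezone

# Simplified auditd-style record; real output has many more key=value fields.
SAMPLE_RECORD = (
    'type=SYSCALL msg=audit(1604995200.000:1337): '
    'syscall=2 success=yes auid=1000 uid=1000 key="model-watch"'
)

def parse_audit_record(line):
    """Pull the timestamp, audit UID, and rule key out of one auditd-style line."""
    ts = re.search(r'audit\((\d+)\.\d+:\d+\)', line)
    auid = re.search(r'\bauid=(\d+)', line)
    key = re.search(r'key="([^"]+)"', line)
    if not (ts and auid and key):
        return None
    return {
        "when": datetime.fromtimestamp(int(ts.group(1)), tz=timezone.utc),
        "auid": int(auid.group(1)),  # login UID survives su/sudo, so it names the real actor
        "key": key.group(1),         # the -k keyword set by the auditctl rule
    }

record = parse_audit_record(SAMPLE_RECORD)
```

The `auid` field matters here: unlike `uid`, it is set at login and preserved across privilege changes, which is what makes the "who modified the model file" question answerable.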
02

Video: Building and breaking a machine learning system

security, research
Nov 5, 2020

This is a YouTube talk about building and breaking machine learning systems, presented at a security conference (GrayHat Red Team Village). The speaker is exploring whether to develop this content into a hands-on workshop where participants could practice these concepts.

Embrace The Red
03

Machine Learning Attack Series: Image Scaling Attacks

security, research
Oct 28, 2020

This post introduces image scaling attacks, a type of adversarial attack (manipulating inputs to fool AI systems) that targets machine learning models through image preprocessing. The author discovered this attack concept while preparing demos and references academic research on understanding and preventing these attacks.

Embrace The Red
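The core trick is that a naive downscaler keeps only a tiny subset of source pixels, so an attacker can hide a second image in exactly those positions. A toy pure-Python illustration of the idea (nearest-neighbor sampling only; real attacks target the interpolation used by libraries such as OpenCV or Pillow):

```python
def nearest_neighbor_downscale(img, out_h, out_w):
    """Naive nearest-neighbor resize: keeps one source pixel per output pixel."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def craft_attack_image(in_h=8, in_w=8, out_h=2, out_w=2, benign=255, payload=0):
    """Place payload pixels only at the positions the downscaler will sample."""
    img = [[benign] * in_w for _ in range(in_h)]
    for y in range(out_h):
        for x in range(out_w):
            img[(y * in_h) // out_h][(x * in_w) // out_w] = payload
    return img

attack = craft_attack_image()
small = nearest_neighbor_downscale(attack, 2, 2)
# A human reviewing the 8x8 image sees 60 white pixels and 4 dark ones;
# the model, fed the 2x2 downscaled version, sees an all-dark image.
```

The same mismatch between "what the reviewer sees" and "what the model sees" is what the academic work referenced in the post formalizes and defends against.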
04

Machine Learning Attack Series: Adversarial Robustness Toolbox Basics

research, security
Oct 22, 2020

This post demonstrates how to use the Adversarial Robustness Toolbox (ART, an open-source library created by IBM for testing machine learning security) to generate adversarial examples, which are modified images designed to trick AI models into making wrong predictions. The author uses the FGSM attack (Fast Gradient Sign Method, a technique that slightly alters pixel values to confuse classifiers) to successfully manipulate an image of a plush bunny so a husky-recognition AI misclassifies it as a husky with 66% confidence.

Embrace The Red
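Stripped of the library machinery, FGSM is one step: nudge every input feature by `eps` in the sign of the loss gradient. A toy sketch on a hand-built linear classifier (illustrative only; in ART the equivalent step is performed by its `FastGradientMethod` attack class, and the weights and inputs below are made up):

```python
def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: step each feature by eps in the
    direction that increases the loss."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy linear classifier: score = w . x, predict "husky" if score > 0.
w = [0.5, -1.0, 2.0]
x = [1.0, 0.2, 0.8]  # clean input, confidently classified
score = lambda v: sum(wi * vi for wi, vi in zip(w, v))

# For loss = -score (we want to lower the true-class score),
# the gradient of the loss w.r.t. the input is simply -w.
grad = [-wi for wi in w]
x_adv = fgsm(x, grad, eps=0.3)
```

The perturbation is bounded by `eps` per feature, which is why FGSM images look nearly identical to the originals while the model's confidence collapses.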
05

CVE-2020-15266: In Tensorflow before version 2.4.0, when the `boxes` argument of `tf.image.crop_and_resize` has a very large value, the…

security
Oct 21, 2020

TensorFlow versions before 2.4.0 have a bug in the `tf.image.crop_and_resize` function where very large values in the `boxes` argument are converted to NaN (a special floating point value meaning "not a number"), causing undefined behavior and a segmentation fault (a crash from illegal memory access). This vulnerability affects the CPU implementation of the function.

Fix: Upgrade to TensorFlow version 2.4.0 or later, which contains the patch. TensorFlow nightly packages (development builds) after commit eccb7ec454e6617738554a255d77f08e60ee0808 also have the issue resolved.

NVD/CVE Database
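The underlying lesson generalizes: coordinates that overflow the tensor's dtype should be rejected before they reach the kernel. A sketch of such a pre-flight check (this illustrates the input-validation idea, not TensorFlow's actual patch; `validate_boxes` is a hypothetical helper):

```python
import math

FLOAT32_MAX = 3.4028235e38  # largest finite float32 value

def validate_boxes(boxes):
    """Defensive check before handing `boxes` to an image-cropping routine:
    reject non-finite values and values that would overflow float32, the
    condition behind the CVE-2020-15266 crash."""
    for box in boxes:
        for coord in box:
            if not math.isfinite(coord):
                raise ValueError(f"non-finite box coordinate: {coord}")
            if abs(coord) > FLOAT32_MAX:
                raise ValueError(f"box coordinate overflows float32: {coord}")
    return boxes

ok = validate_boxes([[0.0, 0.0, 1.0, 1.0]])  # normal boxes pass through
```

Failing loudly with `ValueError` is the point: a clear exception at the API boundary beats a NaN silently propagating into a segmentation fault.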
06

CVE-2020-15265: In Tensorflow before version 2.4.0, an attacker can pass an invalid `axis` value to `tf.quantization.quantize_and_dequan…

security
Oct 21, 2020

In TensorFlow before version 2.4.0, an attacker can provide an invalid `axis` parameter (a setting that specifies which dimension of data to work with) to a quantization function, causing the program to access memory outside the bounds of an array, which crashes the system. The vulnerability exists because the code only uses DCHECK (a debug-only validation that is disabled in normal builds) rather than proper runtime validation.

Fix: The issue is patched in commit eccb7ec454e6617738554a255d77f08e60ee0808. Upgrade to TensorFlow 2.4.0 or later, or use TensorFlow nightly packages released after this commit.

NVD/CVE Database
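The root cause, validation that exists only in debug builds, is worth a concrete contrast. A debug-only assertion (like TensorFlow's DCHECK) compiles away in release builds, so an invalid axis sails through to an out-of-bounds access; an explicit runtime check fails safely instead. A simplified Python stand-in for the idea (not TensorFlow's code; `validate_axis` is hypothetical):

```python
def validate_axis(axis, ndim):
    """Runtime check that an `axis` argument addresses a real dimension.
    Raises instead of letting an out-of-range index reach array code."""
    if not -ndim <= axis < ndim:
        raise ValueError(f"axis {axis} out of range for rank-{ndim} tensor")
    return axis % ndim  # normalize negative axes to a non-negative index

valid = validate_axis(-1, 3)  # the usual "last dimension" shorthand
```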
07

Hacking neural networks - so we don't get stuck in the matrix

security, research
Oct 20, 2020

This item is promotional content for a conference talk about attacking and defending machine learning systems, presented at GrayHat 2020's Red Team Village. The speaker created an introductory video for a session titled 'Learning by doing: Building and breaking a machine learning system,' scheduled for October 31st, 2020.

Embrace The Red
08

CVE-2020-16977: VS Code Python Extension Remote Code Execution

security
Oct 14, 2020

The VS Code Python extension had a vulnerability where HTML and JavaScript code could be injected through error messages (called tracebacks, which show where a program failed) in Jupyter Notebooks, potentially allowing attackers to steal user information or take control of their computer. The vulnerability occurred because strings in error messages were not properly escaped (prevented from being interpreted as code), and could be triggered by modifying a notebook file directly or by having the notebook connect to a remote server controlled by an attacker.

Fix: Microsoft Security Response Center (MSRC) confirmed the vulnerability and fixed it, with the fix released in October 2020 as documented in their security bulletin.

Embrace The Red
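The general fix for this class of bug is to escape untrusted text before it reaches an HTML renderer. A minimal sketch using Python's standard library (`render_traceback` is a hypothetical helper, not the extension's actual code):

```python
import html

def render_traceback(tb_text):
    """Escape a traceback before embedding it in notebook HTML output,
    so attacker-controlled text in an error message renders as text,
    not as markup or script."""
    return "<pre>{}</pre>".format(html.escape(tb_text))

# An error message a malicious notebook or remote kernel could produce:
hostile = 'ZeroDivisionError: <img src=x onerror="fetch(evil)">'
safe = render_traceback(hostile)
```

After escaping, the `<img>` payload survives only as visible text (`&lt;img ...&gt;`), which is exactly the behavior the missing escaping denied the patched extension's users.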
09

Machine Learning Attack Series: Stealing a model file

security
Oct 10, 2020

Attackers can steal machine learning model files through direct approaches like compromising systems to find model files (often with .h5 extensions), or through indirect approaches like model stealing where attackers build similar models themselves. One specific attack vector involves SSH agent hijacking (exploiting SSH keys stored in memory on compromised machines), which allows attackers to access production systems containing model files without needing the original passphrases.

Embrace The Red
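The "indirect" route deserves a concrete picture: with nothing but query access, an attacker can fit a surrogate that reproduces the victim's behavior. A toy sketch against a linear "model" (everything here, including `query_victim`, is hypothetical; real extraction attacks need far more queries against far larger models):

```python
def query_victim(x):
    """Stand-in for a deployed model the attacker can only query;
    the real target would sit behind an API. The parameters 2.0 and
    1.0 are the secret the attacker wants to recover."""
    return 2.0 * x + 1.0

def steal_linear_model(xs):
    """Fit a surrogate y = a*x + b by least squares on query/response
    pairs: no file access, just enough API calls."""
    ys = [query_victim(x) for x in xs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

a, b = steal_linear_model([0.0, 1.0, 2.0, 3.0])
```

Four queries suffice here only because the victim is linear and noiseless; the point is the shape of the attack, which is why rate limiting and query monitoring show up alongside file-level controls in model-theft defenses.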
10

Coming up: Grayhat Red Team Village talk about hacking a machine learning system

security, research
Oct 9, 2020

This is an announcement for a conference talk about attacking and defending machine learning systems, covering practical threats like brute forcing predictions (testing many inputs to guess outputs), perturbations (small changes to data that fool AI), and backdooring models (secretly poisoning training data). The speaker will discuss both ML-specific attacks and traditional security breaches, as well as defenses to protect these systems.

Embrace The Red
Critical This Week

critical: CVE-2026-40111: PraisonAIAgents is a multi-agent teams system. Prior to 1.5.128, the memory hooks executor in praisonaiagents passes a us… (NVD/CVE Database, Apr 9, 2026)

critical: GHSA-2763-cj5r-c79m: PraisonAI Vulnerable to OS Command Injection (GitHub Advisory Database, Apr 8, 2026)

critical: GHSA-qf73-2hrx-xprp: PraisonAI has sandbox escape via exception frame traversal in `execute_code` (subprocess mode) (CVE-2026-39888, GitHub Advisory Database, Apr 8, 2026)

critical: Hackers exploit a critical Flowise flaw affecting thousands of AI workflows (CSO Online, Apr 8, 2026)