aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3431 items

Assuming Bias and Responsible AI

info · news
safety · policy
Nov 24, 2020

AI and machine learning systems have caused serious problems in real-world situations, including Amazon's recruiting tool that discriminated against women, Microsoft's chatbot that became racist and sexist, IBM's cancer treatment recommendation system that doctors criticized, and Facebook's AI that made incorrect translations leading to someone's arrest. These examples show that AI systems can develop and spread biased predictions and failures with harmful consequences. The article highlights the importance of addressing bias when building and deploying AI systems responsibly.

Embrace The Red

Abusing Application Layer Gateways (NAT Slipstreaming)

info · news
security
Nov 24, 2020

NAT Slipstreaming is a technique where visiting a malicious website can punch a hole through your router's firewall by exploiting the Application Layer Gateway (ALG, a feature that helps protocols like SIP, Session Initiation Protocol, work with firewalls). The attack works because the ALG is designed to allow devices inside a network to open firewall ports, but an attacker can abuse this intended functionality.
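The decision a smuggled payload hijacks can be sketched in miniature. The toy parser below (hypothetical regex and payload, not any router's actual code) models the core of what a SIP ALG does: spot a Contact header in outbound traffic and derive an internal address and port to open a pinhole for.

```python
import re

# Toy model of an Application Layer Gateway (ALG) inspecting outbound
# traffic for SIP REGISTER messages. A real ALG that sees a Contact
# header adjusts NAT mappings so the registered address is reachable
# from outside; NAT Slipstreaming abuses this by smuggling a SIP-looking
# payload inside browser-generated traffic such as a large POST body.
SIP_CONTACT = re.compile(
    r"Contact:\s*<sip:[^@>]+@(\d{1,3}(?:\.\d{1,3}){3}):(\d+)>"
)

def alg_pinhole_decision(payload: str):
    """Return the (internal_ip, port) an ALG would open, or None."""
    match = SIP_CONTACT.search(payload)
    if match is None:
        return None
    return match.group(1), int(match.group(2))

# A payload a malicious page could cause the victim's browser to emit
# (hypothetical values):
smuggled = (
    "REGISTER sip:attacker.example SIP/2.0\r\n"
    "Contact: <sip:user@192.168.1.42:5060>\r\n\r\n"
)

print(alg_pinhole_decision(smuggled))  # ('192.168.1.42', 5060)
```

The point of the sketch is that the ALG has no way to tell a browser-smuggled REGISTER from a legitimate one; the "fix" on real routers is to stop trusting application-layer content for firewall decisions.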

CVE-2020-28975: svm_predict_values in svm.cpp in Libsvm v324, as used in scikit-learn 0.23.2 and other products, allows attackers to cause…

high · vulnerability
security
Nov 21, 2020
CVE-2020-28975

A vulnerability in Libsvm v324 (a machine learning library used by scikit-learn 0.23.2) allows attackers to crash a program by sending a specially crafted machine learning model with an extremely large value in the _n_support array, causing a segmentation fault (a type of crash where the program tries to access memory it shouldn't). The scikit-learn developers noted this only happens if an application violates the library's API by modifying private attributes.
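The underlying pattern is a loader trusting a size field from untrusted input. A toy illustration (not libsvm's actual code; function names are invented for the sketch), where Python's IndexError stands in for the C++ segmentation fault:

```python
# Toy illustration of CVE-2020-28975's shape: a crafted model carries an
# oversized count (here n_support), and downstream code indexes memory
# based on it without checking it against the data actually stored.

def load_support_vectors_unchecked(n_support, coefficients):
    # Trusts the count from the file; in C++ this reads past the buffer
    # (segmentation fault). Python's IndexError is the visible analogue.
    return [coefficients[i] for i in range(n_support)]

def load_support_vectors_checked(n_support, coefficients):
    # Validates the untrusted field against the data it describes.
    if not 0 <= n_support <= len(coefficients):
        raise ValueError(f"n_support={n_support} exceeds stored data")
    return [coefficients[i] for i in range(n_support)]
```

As the scikit-learn developers noted, the library's own API never produces such a model; the crash requires a deserialized or hand-modified file, which is why validating persisted model fields at load time is the general defense.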

Machine Learning Attack Series: Repudiation Threat and Auditing

info · news
security · research

Video: Building and breaking a machine learning system

info · news
security · research

Machine Learning Attack Series: Image Scaling Attacks

info · news
security · research

Leveraging the Blue Team's Endpoint Agent as C2

info · news
security
Oct 26, 2020

During a Red Team Operation (a simulated attack where security testers try to break into a company's systems), researchers discovered that Blue Team infrastructure, like endpoint agents (software that monitors and controls devices on a network), can be exploited for remote code execution (running commands on systems without authorization) if not properly protected. Companies often lack adequate security controls like MFA (multi-factor authentication, requiring multiple verification steps) and monitoring to prevent unauthorized access to these agents.

Machine Learning Attack Series: Adversarial Robustness Toolbox Basics

info · news
research · security

CVE-2020-15266: In Tensorflow before version 2.4.0, when the `boxes` argument of `tf.image.crop_and_resize` has a very large value, the…

low · vulnerability
security
Oct 21, 2020
CVE-2020-15266

TensorFlow versions before 2.4.0 have a bug in the `tf.image.crop_and_resize` function where very large values in the `boxes` argument are converted to NaN (a special floating point value meaning "not a number"), causing undefined behavior and a segmentation fault (a crash from illegal memory access). This vulnerability affects the CPU implementation of the function.
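Why a NaN leads to undefined behavior is worth a small sketch (hypothetical coordinate math, not TensorFlow code): an oversized coordinate overflows to infinity, arithmetic on infinities produces NaN, and NaN then slips past naive range checks because every comparison with NaN is False.

```python
import math

# An absurdly large box coordinate, as in the crafted `boxes` argument.
x1 = 1e400                 # the literal overflows to float('inf')
width = x1 - 1e400         # inf - inf yields NaN

# NaN defeats naive range checks: both `v < 0` and `v > limit` are
# False for NaN, so a reject branch written this way never fires and
# the bad value flows on into indexing code.
def naive_reject(v, limit):
    return v < 0 or v > limit          # False for NaN: not rejected

def safe_reject(v, limit):
    return math.isnan(v) or v < 0 or v > limit

print(math.isnan(width), naive_reject(width, 512.0), safe_reject(width, 512.0))
# True False True
```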

CVE-2020-15265: In Tensorflow before version 2.4.0, an attacker can pass an invalid `axis` value to `tf.quantization.quantize_and_dequantize`…

medium · vulnerability
security
Oct 21, 2020
CVE-2020-15265

In TensorFlow before version 2.4.0, an attacker can provide an invalid `axis` parameter (a setting that specifies which dimension of data to work with) to a quantization function, causing the program to access memory outside the bounds of an array, which crashes the system. The vulnerability exists because the code only uses DCHECK (a debug-only validation that is disabled in normal builds) rather than proper runtime validation.
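The DCHECK pitfall has a direct Python analogue: `assert` statements are stripped when the interpreter runs with `-O`, just as DCHECK is compiled out of release builds, so neither can be the only validation of untrusted input. A sketch with hypothetical helpers (not TensorFlow's API):

```python
# Debug-only validation versus real runtime validation of an `axis`
# argument. The first check disappears under `python -O`; the second
# survives in every build.

def check_axis_debug_only(axis, ndims):
    assert 0 <= axis < ndims       # vanishes under `python -O`
    return axis

def check_axis_runtime(axis, ndims):
    if not 0 <= axis < ndims:
        raise ValueError(f"axis {axis} out of range for rank {ndims}")
    return axis
```

The TensorFlow fix followed the second shape: replace the debug-only check with validation that runs in release builds and returns an error instead of reading out of bounds.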

Hacking neural networks - so we don't get stuck in the matrix

info · news
security · research

What does an offensive security team actually do?

info · news
security
Oct 19, 2020

Offensive security teams are groups that test and challenge an organization's defenses by simulating attacks from an adversary's perspective. Rather than debating terminology like 'red team' or 'pentest' (security testing where authorized people attempt to break into systems), the source suggests defining these teams by the services they provide to customers within the organization, including business groups, defensive teams, developers, and employees.

CVE-2020-16977: VS Code Python Extension Remote Code Execution

high · news
security
Oct 14, 2020

The VS Code Python extension had a vulnerability where HTML and JavaScript code could be injected through error messages (called tracebacks, which show where a program failed) in Jupyter Notebooks, potentially allowing attackers to steal user information or take control of their computer. The vulnerability occurred because strings in error messages were not properly escaped (prevented from being interpreted as code), and could be triggered by modifying a notebook file directly or by having the notebook connect to a remote server controlled by an attacker.
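The missing-escaping bug class is easy to show with the standard library's `html.escape` (the traceback string below is a hypothetical example, not the actual payload from the report):

```python
import html

# Attacker-influenced traceback text, e.g. an error message that quotes
# a module name carrying markup:
traceback_text = 'No module named "<img src=x onerror=alert(1)>"'

# Interpolating this into notebook HTML unescaped lets the payload
# execute; escaping first renders it as inert text.
rendered_unsafe = f"<pre>{traceback_text}</pre>"
rendered_safe = f"<pre>{html.escape(traceback_text)}</pre>"
print(rendered_safe)
```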

Machine Learning Attack Series: Stealing a model file

medium · news
security
Oct 10, 2020

Attackers can steal machine learning model files through direct approaches like compromising systems to find model files (often with .h5 extensions), or through indirect approaches like model stealing where attackers build similar models themselves. One specific attack vector involves SSH agent hijacking (exploiting SSH keys stored in memory on compromised machines), which allows attackers to access production systems containing model files without needing the original passphrases.

Coming up: Grayhat Red Team Village talk about hacking a machine learning system

info · news
security · research

CVE-2020-15214: In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger a write out of bounds / segmentation fault…

high · vulnerability
security
Sep 25, 2020
CVE-2020-15214

TensorFlow Lite versions before 2.2.1 and 2.3.1 have a bug where the segment sum operation (a function that groups and sums data) crashes or causes memory corruption if the segment IDs (labels that organize the data) are not sorted in increasing order. The code incorrectly assumes the IDs are sorted, so it allocates too little memory, leading to a segmentation fault (a crash caused by accessing memory it shouldn't).
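The flawed sortedness assumption can be reproduced in a toy segment sum (plain Python, not TensorFlow Lite's implementation): sizing the output from the *last* ID is only correct when the IDs are sorted, and Python's IndexError stands in for the segmentation fault.

```python
# Buggy: allocates output based on the last segment ID, which is only
# the maximum when the IDs are sorted in increasing order.
def segment_sum_buggy(data, segment_ids):
    out = [0.0] * (segment_ids[-1] + 1)   # assumes sorted IDs
    for value, sid in zip(data, segment_ids):
        out[sid] += value                 # IndexError when unsorted
    return out

# Fixed: allocates for the true maximum, whatever the order.
def segment_sum_fixed(data, segment_ids):
    out = [0.0] * (max(segment_ids) + 1)
    for value, sid in zip(data, segment_ids):
        out[sid] += value
    return out

segment_sum_buggy([1, 2, 3], [0, 1, 1])   # fine: IDs are sorted
# segment_sum_buggy([1, 2, 3], [2, 0, 0]) # IndexError: the Python
#                                         # stand-in for the segfault
```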

CVE-2020-15213: In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger a denial of service by causing…

medium · vulnerability
security
Sep 25, 2020
CVE-2020-15213

TensorFlow Lite (a lightweight version of TensorFlow used on mobile and embedded devices) before versions 2.2.1 and 2.3.1 has a vulnerability where attackers can crash an application by making it try to allocate too much memory through the segment sum operation (a function that groups and sums data). The vulnerability works because the code uses the largest value in the input data to determine how much memory to request, so an attacker can provide a very large number to exhaust available memory.
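The exhaustion vector is distinct from the sortedness bug: here the allocation size is simply the largest attacker-supplied ID. A toy sketch of the safe pattern (the bound `MAX_SEGMENTS` is an invented application-level limit, not a TensorFlow constant) checks the requested size before allocating:

```python
# One huge segment ID requests an enormous output buffer; a sanity
# bound turns the out-of-memory crash into a clean error.
MAX_SEGMENTS = 1_000_000   # hypothetical bound chosen by the application

def plan_output_allocation(segment_ids):
    n_segments = max(segment_ids) + 1
    if n_segments > MAX_SEGMENTS:
        raise ValueError(f"refusing to allocate {n_segments} segments")
    return n_segments

plan_output_allocation([0, 1, 2])      # 3
# plan_output_allocation([0, 2**40])   # ValueError instead of OOM
```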

CVE-2020-15212: In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger writes outside of bounds of heap…

high · vulnerability
security
Sep 25, 2020
CVE-2020-15212

TensorFlow Lite versions before 2.2.1 and 2.3.1 have a vulnerability where negative values in the segment_ids tensor (an array of numbers used to group data) can cause the software to write data outside its allocated memory area, potentially crashing the program or corrupting memory. This vulnerability can be exploited by anyone who can modify the segment_ids data.
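In C++ a negative segment ID writes before the start of the heap buffer; Python's negative indexing makes the same mistake land silently in the wrong bucket instead of crashing, which is a handy way to see why the IDs need explicit validation. (Toy code, not TensorFlow Lite's implementation.)

```python
def segment_sum(data, segment_ids, n_segments):
    out = [0.0] * n_segments
    for value, sid in zip(data, segment_ids):
        out[sid] += value            # sid == -1 quietly hits out[-1]
    return out

def segment_sum_validated(data, segment_ids, n_segments):
    # The check the vulnerable code lacked: reject negative IDs up front.
    if any(sid < 0 for sid in segment_ids):
        raise ValueError("segment IDs must be non-negative")
    return segment_sum(data, segment_ids, n_segments)

print(segment_sum([10, 20], [0, -1], 3))
# [10.0, 0.0, 20.0]: the -1 write corrupted the last bucket
```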

CVE-2020-15211: In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a do…

medium · vulnerability
security
Sep 25, 2020
CVE-2020-15211

TensorFlow Lite (a machine learning framework for mobile devices) versions before 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 have a vulnerability in how they validate saved models. The framework uses a special index value of -1 to mark optional inputs, but this value is incorrectly accepted for all operators and even output tensors, allowing attackers to read and write data outside the intended memory boundaries.
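The advisory's suggested mitigation is a custom Verifier over tensor indices. A toy sketch of that check (the operator table and the choice of which input is optional are invented for illustration; this is not the TFLite API): the -1 sentinel is only legal for inputs an operator declares optional, and never for outputs.

```python
# operator name -> input positions that may legally be -1 (assumed here)
OPTIONAL_INPUTS = {
    "FULLY_CONNECTED": {2},   # e.g. an optional bias input
}

def verify_tensor_indices(op, input_indices, output_indices):
    """Reject any use of the -1 sentinel outside declared optional inputs."""
    optional = OPTIONAL_INPUTS.get(op, set())
    for pos, idx in enumerate(input_indices):
        if idx == -1 and pos not in optional:
            raise ValueError(f"{op}: input {pos} may not use the -1 sentinel")
    if any(idx == -1 for idx in output_indices):
        raise ValueError(f"{op}: outputs may never use the -1 sentinel")

verify_tensor_indices("FULLY_CONNECTED", [0, 1, -1], [3])  # legal: optional bias
# verify_tensor_indices("ADD", [-1, 1], [2])               # ValueError
```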

CVE-2020-15210: In tensorflow-lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, if a TFLite saved model uses the same tensor as…

medium · vulnerability
security
Sep 25, 2020
CVE-2020-15210

TensorFlow Lite (a machine learning framework for running AI models on mobile and embedded devices) versions before 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 has a vulnerability where using the same tensor (a multi-dimensional array of data) as both input and output in an operation can cause a segmentation fault (a crash where the program tries to access memory it shouldn't) or memory corruption (where data in memory gets corrupted). This happens because the code doesn't properly validate inputs when a tensor is used in this way.
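The input/output aliasing hazard is easy to demonstrate with a toy operation (plain Python, not TFLite code): an operation that reads its input while writing its output produces garbage when both names refer to the same buffer, because later reads see already-overwritten values.

```python
def shift_add(src, dst):
    # dst[i] = src[i] + src[i-1]; correct only when dst is a separate
    # buffer, since the loop re-reads src positions it may have written.
    for i in range(1, len(src)):
        dst[i] = src[i] + src[i - 1]
    return dst

data = [1, 2, 3, 4]
print(shift_add(data, [0] * 4))       # [0, 3, 5, 7]: separate output, correct

aliased = [1, 2, 3, 4]
print(shift_add(aliased, aliased))    # [1, 3, 6, 10]: aliased, corrupted
```

In C++ the same aliasing can corrupt memory or segfault rather than merely give wrong answers, which is why the fix validates that a tensor is not used as both input and output.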

Abusing Application Layer Gateways (NAT Slipstreaming)
Source: Embrace The Red

CVE-2020-28975
Fix: A patch is available in scikit-learn at commit 1bf13d567d3cd74854aa8343fd25b61dd768bb85 on GitHub.
Source: NVD/CVE Database
Machine Learning Attack Series: Repudiation Threat and Auditing
Nov 10, 2020

Repudiation is a security threat where someone denies performing an action, such as replacing an AI model file with a malicious version. The source explains how to use auditd (a Linux auditing tool) and centralized monitoring systems like Splunk or Elastic Stack to create audit logs that track who accessed or modified files and when, helping prove or investigate whether specific accounts made changes.

Fix: To mitigate repudiation threats, the source recommends: (1) installing and configuring auditd on Linux using 'sudo apt install auditd', (2) adding file monitoring rules with auditctl (example: 'sudo auditctl -w /path/to/file -p rwa -k keyword' to audit read, write, and append operations), and (3) pushing audit logs to centralized monitoring systems such as Splunk, Elastic Stack, or Azure Sentinel for analysis and visualization.

Source: Embrace The Red
Video: Building and breaking a machine learning system
Nov 5, 2020

This is a YouTube talk about building and breaking machine learning systems, presented at a security conference (GrayHat Red Team Village). The speaker is exploring whether to develop this content into a hands-on workshop where participants could practice these concepts.

Source: Embrace The Red
Machine Learning Attack Series: Image Scaling Attacks
Oct 28, 2020

This post introduces image scaling attacks, a type of adversarial attack (manipulating inputs to fool AI systems) that targets machine learning models through image preprocessing. The author discovered this attack concept while preparing demos and references academic research on understanding and preventing these attacks.

Source: Embrace The Red
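The core trick of an image scaling attack can be sketched with 1-D "images" and plain lists (real attacks target 2-D images and the resize routines of ML preprocessing pipelines; all values here are invented): nearest-neighbor downscaling by a factor k keeps only every k-th pixel, so an attacker plants the payload image at exactly those positions and fills the rest with benign content.

```python
def downscale_nearest(pixels, k):
    # Nearest-neighbor downscale by factor k: sample every k-th pixel.
    return pixels[::k]

hidden = [9, 9, 9, 9]      # what the model should see after resizing
benign = 0                 # what a human sees almost everywhere
k = 4

crafted = []
for pix in hidden:
    crafted.append(pix)            # the sampled position carries payload
    crafted.extend([benign] * (k - 1))

print(downscale_nearest(crafted, k))  # [9, 9, 9, 9]: the hidden image
```

At full resolution the crafted input is mostly benign pixels, but the model, which only ever sees the downscaled version, receives the hidden image.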
Leveraging the Blue Team's Endpoint Agent as C2
Source: Embrace The Red
Machine Learning Attack Series: Adversarial Robustness Toolbox Basics
Oct 22, 2020

This post demonstrates how to use the Adversarial Robustness Toolbox (ART, an open-source library created by IBM for testing machine learning security) to generate adversarial examples, which are modified images designed to trick AI models into making wrong predictions. The author uses the FGSM attack (Fast Gradient Sign Method, a technique that slightly alters pixel values to confuse classifiers) to manipulate an image of a plush bunny so that an image classifier misclassifies it as a husky with 66% confidence.

Source: Embrace The Red
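FGSM itself is simple enough to sketch from scratch on a tiny logistic-regression "model" (an illustrative stand-in with invented weights; the post uses IBM's ART library against a real image classifier): perturb the input by eps times the sign of the loss gradient with respect to the input.

```python
import math

w = [2.0, -3.0, 1.5]   # fixed model weights (assumed for the sketch)

def predict(x):
    # Logistic regression: probability of class 1.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

def fgsm(x, y_true, eps):
    # For logistic loss, d(loss)/d(x_i) = (p - y) * w_i.
    p = predict(x)
    grad = [(p - y_true) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.2, 0.5]
print(predict(x))                 # confidently class 1 (~0.90)
x_adv = fgsm(x, y_true=1.0, eps=0.8)
print(predict(x_adv))             # pushed below 0.5: prediction flips
```

The same one-step recipe, applied per pixel with a small eps, is what makes the perturbed bunny image look unchanged to a human while flipping the classifier's output.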

CVE-2020-15266
Fix: Upgrade to TensorFlow version 2.4.0 or later, which contains the patch. TensorFlow nightly packages (development builds) after commit eccb7ec454e6617738554a255d77f08e60ee0808 also have the issue resolved.
Source: NVD/CVE Database

CVE-2020-15265
Fix: The issue is patched in commit eccb7ec454e6617738554a255d77f08e60ee0808. Upgrade to TensorFlow 2.4.0 or later, or use TensorFlow nightly packages released after this commit.
Source: NVD/CVE Database
Hacking neural networks - so we don't get stuck in the matrix
Oct 20, 2020

This item is promotional content for a conference talk about attacking and defending machine learning systems, presented at GrayHat 2020's Red Team Village. The speaker created an introductory video for a session titled 'Learning by doing: Building and breaking a machine learning system,' scheduled for October 31st, 2020.

Source: Embrace The Red
What does an offensive security team actually do?
Source: Embrace The Red

CVE-2020-16977: VS Code Python Extension Remote Code Execution
Fix: Microsoft Security Response Center (MSRC) confirmed the vulnerability and fixed it, with the fix released in October 2020 as documented in their security bulletin.
Source: Embrace The Red
Machine Learning Attack Series: Stealing a model file
Source: Embrace The Red
Coming up: Grayhat Red Team Village talk about hacking a machine learning system
Oct 9, 2020

This is an announcement for a conference talk about attacking and defending machine learning systems, covering practical threats like brute forcing predictions (testing many inputs to guess outputs), perturbations (small changes to data that fool AI), and backdooring models (secretly poisoning training data). The speaker will discuss both ML-specific attacks and traditional security breaches, as well as defenses to protect these systems.

Source: Embrace The Red

CVE-2020-15214
Fix: Upgrade to TensorFlow Lite version 2.2.1 or 2.3.1. As a partial workaround for cases where segment IDs are stored in the model file, add a custom Verifier to the model loading code to check that segment IDs are sorted; however, this workaround does not work if segment IDs are generated during inference (when the model is running), in which case upgrading to patched code is necessary.
Source: NVD/CVE Database

CVE-2020-15213
Fix: Upgrade to TensorFlow versions 2.2.1 or 2.3.1. As a partial workaround (only if segment IDs are fixed in the model file), add a custom `Verifier` to limit the maximum value allowed in the segment IDs tensor. If segment IDs are generated during inference, similar validation can be added between inference steps. However, if segment IDs are generated as outputs of a tensor during inference, no workaround is possible and upgrading is required.
Source: NVD/CVE Database

CVE-2020-15212
Fix: The issue is patched in TensorFlow versions 2.2.1 and 2.3.1. As a workaround for unpatched versions, users can add a custom Verifier (a validation tool) to the model loading code to check that all segment IDs are positive if they are stored in the model file, or add similar validation at runtime if they are generated during execution. However, if segment IDs are generated as outputs during inference, no workaround is available and upgrading to patched code is required.
Source: NVD/CVE Database

CVE-2020-15211
Fix: Upgrade to TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. As a potential workaround, add a custom Verifier to the model loading code to ensure that only operators which accept optional inputs use the -1 special value, and only for the tensors they expect to be optional; however, the source advises that this approach is error-prone and recommends upgrading instead.
Source: NVD/CVE Database

CVE-2020-15210
Fix: Upgrade to TensorFlow Lite version 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1. The issue was patched in commit d58c96946b.
Source: NVD/CVE Database