All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
TensorFlow versions before 2.2.1 and 2.3.1 have a memory leak (wasted computer memory that isn't freed) when users pass a list of strings to a function called `dlpack.to_dlpack`. The bug happens because the code doesn't properly check for error conditions during validation, so it continues running even when it should stop and clean up.
Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which include the fix released in commit 22e07fb204386768e5bcbea563641ea11f96ceb8.
NVD/CVE Database: TensorFlow versions before 2.2.1 and 2.3.1 have a bug where invalid arguments to `dlpack.to_dlpack` (a function that converts data between formats) cause the code to create null pointers (memory references that point to nothing) without properly checking for errors. This can lead to the program crashing or behaving unpredictably when it tries to use these invalid pointers.
TensorFlow versions before 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1 have a bug in the `tf.raw_ops.Switch` operation where it tries to access a null pointer (a reference to nothing), causing the program to crash. The problem occurs because the operation outputs two tensors (data structures in machine learning frameworks) but only one is actually created, leaving the other as an undefined reference that shouldn't be accessed.
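Both TensorFlow issues above stem from the same class of bug: validation code that records an error but keeps executing, so later code dereferences a null or half-built result. A minimal pure-Python sketch of that pattern (function names are illustrative, not TensorFlow's actual internals):

```python
# Hypothetical sketch of the "unchecked error status" bug class behind
# the dlpack.to_dlpack CVEs. Names are illustrative, not TensorFlow code.

def validate_tensor_input(value):
    """Return (ok, data_or_none). Lists of strings are not valid input."""
    if isinstance(value, list):
        return False, None          # error recorded here ...
    return True, value

def to_dlpack_buggy(value):
    ok, data = validate_tensor_input(value)
    # BUG: the status flag is ignored, execution continues with data == None
    return data.hex()               # crashes (the Python analogue of a null deref)

def to_dlpack_fixed(value):
    ok, data = validate_tensor_input(value)
    if not ok:                      # FIX: stop as soon as validation fails
        raise ValueError("expected a tensor-like bytes object, not a list")
    return data.hex()
```

With the fix, invalid input produces a clean `ValueError` instead of continuing into code that assumes validation succeeded.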
This item describes a presentation about 'Shadowbunny,' a technique that uses virtual machines (software that simulates a complete computer inside another computer) to hide malware and avoid detection by security tools. The content provided is primarily background information about the presentation's origin and does not detail the actual technical attack or defense mechanisms.
CVE-2020-14338 is a flaw in WildFly's XML processing component where the XMLSchemaValidator class doesn't properly enforce a security feature called "use-grammar-pool-only," allowing a specially-crafted XML file to bypass validation checks. This vulnerability affects all Xerces JBoss versions before 2.12.0.SP3 and is related to a similar flaw found in OpenJDK.
The Note app pre-installed on KaiOS 2.5 is vulnerable to HTML and JavaScript injection (a type of attack where malicious code is inserted into an application). A local attacker (someone with access to the device) can inject harmful code into the Note app to take over its interface, trick users into giving up login credentials, or exploit any permissions the app has.
KaiOS 2.5's pre-installed Recorder app has a vulnerability allowing HTML and JavaScript injection (inserting malicious code into a web application), where a local attacker (someone with access to the device) can inject harmful code to take over the app's interface or trick users into revealing credentials.
KaiOS versions 1.0, 2.5, and 2.5.1 contain a vulnerability in their built-in Radio app that allows HTML and JavaScript injection (code inserted into a program to make it behave unexpectedly). An attacker with local access to the device could inject malicious code to manipulate the app's interface, trick users into revealing passwords, or exploit any permissions the Radio app has been granted.
KaiOS versions 2.5 and 2.5.1 contain a vulnerability in the File Manager app where attackers can inject HTML and JavaScript (code that runs in web browsers) through malicious email attachments. This allows attackers to manipulate the app's interface, trick users into revealing login credentials, or exploit the app's permissions.
KaiOS 2.5 and 2.5.1's Contacts app is vulnerable to HTML and JavaScript injection (inserting malicious code into a web application), where an attacker can send a specially crafted vCard file (a contact format) that, when imported, executes their code within the app. This lets attackers manipulate what users see, steal credentials by displaying fake login prompts, or exploit the app's permissions to access sensitive device features.
KaiOS versions 1.0, 2.5, and 2.5.1 have a vulnerability in their pre-installed Email app that allows HTML and JavaScript injection (inserting malicious code into a webpage or application). An attacker can send a specially crafted email that injects harmful code into the email app's interface when opened, potentially letting them trick users into revealing passwords or access the app's permissions.
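The common mitigation for all of the KaiOS injection flaws above is to escape attacker-controlled fields before inserting them into the app's HTML. A minimal Python sketch of the idea (illustrative only; the KaiOS apps themselves are HTML/JS applications, and the field name here is a stand-in):

```python
import html

def render_contact_name(untrusted_name: str) -> str:
    """Escape an attacker-controlled field (e.g. a vCard FN value) before
    inserting it into markup, so injected tags render as inert text."""
    return html.escape(untrusted_name, quote=True)

# A malicious vCard display name carrying a script payload:
payload = '<img src=x onerror="steal()">Alice'
```

`render_contact_name(payload)` returns the payload with `<`, `>`, and quotes converted to HTML entities, so the browser displays the markup instead of executing it.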
Operationalizing an ML model (putting it into production so it can be used by real applications) involves deploying the trained model to a web server so it can make predictions. The author found that integrating TensorFlow (a popular ML framework) with Golang was unexpectedly complicated, so they chose Python instead for their web server.
This post describes how the author built Husky AI, a machine learning system that classifies images as huskies or non-huskies, using a convolutional neural network (CNN, a type of AI model designed to process images). The author gathered about 1,300 husky images and 3,000 other images using Bing Image Search, then organized them into separate training and validation folders to build and test the model. The post notes a potential security risk: attackers could poison either the training or validation image sets to cause the model to perform poorly.
Fix: Update TensorFlow to version 2.2.1 or 2.3.1, which contain the patch for this issue.
Fix: Update to TensorFlow version 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 (or later). The issue is patched in commit da8558533d925694483d2c136a9220d6d49d843c.
This article describes a participant's experience in Microsoft and CUJO AI's Machine Learning Security Evasion Competition, where the goal was to modify malware samples to bypass machine learning models (AI systems trained to detect malicious files) while keeping them functional. The participant attempted two main evasion techniques: hiding data in binaries using steganography (concealing information within files), which had minimal impact, and signing binaries with fake Microsoft certificates using Authenticode (a digital signature system that verifies software authenticity), which showed more promise.
This post discusses backdooring attacks on machine learning models, where an adversary gains access to a model file (the trained AI system used in production) and overwrites it with malicious code. The threat was identified during threat modeling, which is a security planning process where teams imagine potential attacks to prepare defenses. The post indicates it will cover attacks, mitigations, and how Husky AI was built to address this risk.
This post discusses a machine learning attack technique where researchers modify existing images through small changes (perturbations, or slight adjustments to pixels) to trick an AI model into misclassifying them. For example, they aim to alter a picture of a plush bunny so that an image recognition model incorrectly identifies it as a husky dog.
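The bounded-change idea behind a perturbation can be sketched in a few lines. Real attacks such as FGSM choose the perturbation direction from the model's gradients; this stdlib-only sketch uses random noise purely to illustrate the constraint that each pixel moves at most a small amount:

```python
import random

def perturb(pixels, epsilon=8, seed=0):
    """Apply a small perturbation to 8-bit pixel values: each pixel
    changes by at most +/-epsilon and stays within [0, 255]. The random
    direction here is a placeholder for a gradient-guided choice."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon))) for p in pixels]
```

With a well-chosen direction, changes this small are invisible to a human looking at the plush bunny yet can flip the model's prediction to "husky".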
This post is part of a series about machine learning security attacks, with sections covering how an AI system called Husky AI was built and threat-modeled, plus investigations into attacks against it. The previous post demonstrated basic techniques to fool an image recognition model (a type of AI trained to identify what's in pictures) by generating images with solid colors or random pixels.
A researcher tested a machine learning model called Husky AI by creating simple test images (all black, all white, and random pixels) and sending them through an HTTP API to see if the model would make incorrect predictions. The white canvas image successfully tricked the model into incorrectly classifying it as a husky, demonstrating a perturbation attack (where slightly modified or unusual inputs fool an AI into making wrong predictions).
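Test canvases like the ones described are trivial to generate. A stdlib-only sketch (the model's actual input dimensions are not given in this summary, so the size is an assumption):

```python
import random

WIDTH = HEIGHT = 128  # assumed input size; the real model's may differ

def solid_canvas(value):
    """A WIDTH x HEIGHT grayscale image filled with one value
    (0 = all black, 255 = all white)."""
    return [[value] * WIDTH for _ in range(HEIGHT)]

def noise_canvas(seed=0):
    """A canvas of uniformly random pixel values."""
    rng = random.Random(seed)
    return [[rng.randint(0, 255) for _ in range(WIDTH)] for _ in range(HEIGHT)]

white, black = solid_canvas(255), solid_canvas(0)
# Each canvas would then be encoded (e.g. as PNG) and POSTed to the
# model's HTTP prediction API to check for a misclassification.
```

That an all-white canvas scores as "husky" suggests the model latched onto background color (snow) rather than the dog itself, which is exactly the kind of shortcut these probing inputs are meant to expose.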
This post explains threat modeling for machine learning systems, which is a process to systematically identify potential security attacks. The author uses Microsoft's Threat Modeling tool and STRIDE (a framework categorizing threats into spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to identify vulnerabilities in a machine learning system called 'Husky AI', and notes that perturbation attacks (where attackers query the model to trick it into making wrong predictions) are a particular concern for ML systems.
This post introduces the machine learning pipeline, which consists of sequential steps: collecting training images, pre-processing the data, defining and training a model, evaluating its performance, and finally deploying it to production as an API (application programming interface, a way for software to communicate). The author uses a "Husky AI" example application that identifies whether uploaded images contain huskies, and explains that understanding the pipeline's components is important for identifying potential security attacks on machine learning systems.
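The pipeline's sequential shape can be sketched as composable stages; the stage bodies below are placeholders, not the post's actual Husky AI code:

```python
# Each stage consumes the previous stage's output; the bodies are stubs.
def collect(n):          return [f"img_{i}" for i in range(n)]         # gather images
def preprocess(images):  return [img.upper() for img in images]        # normalize data
def train(data):         return {"model": "cnn", "trained_on": len(data)}
def evaluate(model):     return {**model, "accuracy": 0.9}             # placeholder metric
def deploy(model):       return f"api serving {model['model']}"        # expose as API

def run_pipeline(n):
    """Chain the stages: collect -> preprocess -> train -> evaluate -> deploy."""
    return deploy(evaluate(train(preprocess(collect(n)))))
```

Viewing the pipeline as a chain like this also clarifies the security point: each arrow is an attack surface, since tampering with any stage's output (poisoned images, a swapped model file, a hijacked API) propagates downstream.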