aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1290 items

CVE-2020-16977: VS Code Python Extension Remote Code Execution

high, news
security
Oct 14, 2020

The VS Code Python extension had a vulnerability where HTML and JavaScript code could be injected through error messages (called tracebacks, which show where a program failed) in Jupyter Notebooks, potentially allowing attackers to steal user information or take control of their computer. The vulnerability occurred because strings in error messages were not properly escaped (prevented from being interpreted as code), and could be triggered by modifying a notebook file directly or by having the notebook connect to a remote server controlled by an attacker.

Fix: Microsoft Security Response Center (MSRC) confirmed the vulnerability and fixed it, with the fix released in October 2020 as documented in their security bulletin.
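The root cause, unescaped traceback strings rendered as notebook HTML, is the classic injection pattern; a minimal sketch of the missing escaping step in Python (the function and markup are illustrative, not the extension's actual code):

```python
import html

def render_traceback(tb_text):
    """Render an error traceback into notebook HTML. Escaping first is the
    missing step behind the bug: without html.escape, markup embedded in the
    traceback text would execute in the notebook UI."""
    return "<pre class='traceback'>" + html.escape(tb_text) + "</pre>"

rendered = render_traceback("Boom: <img src=x onerror=alert(1)>")
```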

Embrace The Red

Machine Learning Attack Series: Stealing a model file

medium, news
security
Oct 10, 2020

Attackers can steal machine learning model files through direct approaches, such as compromising systems and locating model files (often with .h5 extensions), or through indirect approaches such as model stealing, where attackers reconstruct a similar model themselves. One specific attack vector is SSH agent hijacking (reusing another user's ssh-agent on a compromised machine), which lets attackers reach production systems containing model files without ever knowing the original passphrases.
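On the defensive side, the same "find the model files" step doubles as an inventory check; a quick sketch in Python (only the .h5 extension comes from the post, the rest of the list is assumed):

```python
from pathlib import Path
import tempfile

# Extensions commonly used for serialized models; only .h5 is named in the
# post itself, the others are illustrative.
MODEL_EXTENSIONS = {".h5", ".pb", ".pt", ".onnx", ".pkl"}

def find_model_files(root):
    """Walk `root` and return every file that looks like a model artifact."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in MODEL_EXTENSIONS
    )

# Demo on a throwaway directory standing in for a compromised host.
root = Path(tempfile.mkdtemp())
(root / "app").mkdir()
(root / "app" / "huskymodel.h5").write_bytes(b"weights")
(root / "app" / "notes.txt").write_bytes(b"readme")
found = [p.name for p in find_model_files(root)]
```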

Coming up: Grayhat Red Team Village talk about hacking a machine learning system

info, news
security, research

Beware of the Shadowbunny - Using virtual machines to persist and evade detections

info, news
security
Sep 23, 2020

This item describes a presentation about 'Shadowbunny,' a technique that uses virtual machines (software that simulates a complete computer inside another computer) to hide malware and avoid detection by security tools. The content provided is primarily background information about the presentation's origin and does not detail the actual technical attack or defense mechanisms.

Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries

info, news
security, research

Machine Learning Attack Series: Backdooring models

info, news
security, research

Machine Learning Attack Series: Perturbations to misclassify existing images

info, news
security, research

Machine Learning Attack Series: Smart brute forcing

info, news
security, research

Machine Learning Attack Series: Brute forcing images to find incorrect predictions

info, news
research, security

Threat modeling a machine learning system

info, news
security, research

MLOps - Operationalizing the machine learning model

info, news
research
Sep 5, 2020

Operationalizing an ML model (putting it into production so it can be used by real applications) involves deploying the trained model to a web server so it can make predictions. The author found that integrating TensorFlow (a popular ML framework) with Golang was unexpectedly complicated, so they chose Python instead for their web server.
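A minimal sketch of that serving step using only Python's standard library; `predict` here is a stand-in for loading and calling the real TensorFlow model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(pixels):
    """Stand-in for the trained model: returns a husky probability.
    In the real system this would call model.predict() on a loaded model."""
    # Toy heuristic (mean brightness) so the sketch is self-contained.
    brightness = sum(pixels) / max(len(pixels), 1) / 255.0
    return round(min(max(brightness, 0.0), 1.0), 3)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and score it with the model.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(
            {"husky_probability": predict(payload.get("pixels", []))}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# The real server would run:
#   HTTPServer(("", 8080), PredictHandler).serve_forever()
```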

Husky AI: Building a machine learning system

info, news
research
Sep 4, 2020

This post describes how the author built Husky AI, a machine learning system that classifies images as huskies or non-huskies, using a convolutional neural network (CNN, a type of AI model designed to process images). The author gathered about 1,300 husky images and 3,000 other images using Bing Image Search, then organized them into separate training and validation folders to build and test the model. The post notes a potential security risk: attackers could poison either the training or validation image sets to cause the model to perform poorly.
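One hedge against the poisoning risk the post mentions is an integrity manifest over the training and validation folders, so any tampered image is detected before retraining; a sketch (file names and layout are illustrative):

```python
import hashlib
import tempfile
from pathlib import Path

def build_manifest(root):
    """Map each dataset file to its SHA-256 digest."""
    return {
        p.relative_to(root).as_posix(): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def changed_files(before, after):
    """Files whose contents differ (or appeared/disappeared) between snapshots."""
    return {k for k in before.keys() | after.keys() if before.get(k) != after.get(k)}

# Demo on a throwaway dataset directory.
root = Path(tempfile.mkdtemp())
(root / "train").mkdir()
(root / "train" / "husky_001.jpg").write_bytes(b"husky pixels")
(root / "train" / "other_001.jpg").write_bytes(b"other pixels")

baseline = build_manifest(root)
(root / "train" / "husky_001.jpg").write_bytes(b"poisoned pixels")  # simulated attack
tampered = changed_files(baseline, build_manifest(root))
```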

The machine learning pipeline and attacks

info, news
research, security

Getting the hang of machine learning

info, news
security, research

Beware of the Shadowbunny! at BSides Singapore

info, news
security
Aug 28, 2020

A security researcher will present on Shadowbunny, a technique that misuses virtual machines (software that simulates a computer) during lateral movement (when an attacker spreads from one compromised system to another). The presentation will also discuss threat hunting (searching for signs of attacks) and detection methods to identify this technique.

Race conditions when applying ACLs

info, news
security
Aug 24, 2020

Race conditions in ACL (access control list, the rules that determine who can access files) application occur when a system creates a sensitive file but there is a time gap before permissions are applied to protect it, potentially allowing attackers to access the file during that window. This type of vulnerability exploits the timing between file creation and permission lockdown to expose sensitive information.
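In Python the vulnerable and safe patterns look like this; the path and file contents are illustrative:

```python
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "secrets.txt")

# Vulnerable pattern: the file briefly exists with default (umask-derived)
# permissions before chmod runs -- that window is the race:
#   open(path, "w").write(secret); os.chmod(path, 0o600)

# Safer pattern: have the OS create the file with restrictive permissions
# atomically, so a permissive window never exists.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("api-key=...")

mode = stat.S_IMODE(os.stat(path).st_mode)
```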

Red Teaming Telemetry Systems

info, news
security, safety

Illusion of Control: Capability Maturity Models and Red Teaming

info, news
security
Jul 31, 2020

This article discusses how to measure the maturity and effectiveness of security testing programs, particularly red teaming (simulated attacks to find vulnerabilities). The author suggests using existing frameworks like CMMI (Capability Maturity Model Integration, a system developed by Carnegie Mellon University that rates how well-organized software processes are on a scale of one to five) that can be adapted to evaluate offensive security programs.

Motivated Intruder - Red Teaming for Privacy!

info, news
security, privacy

Firefox - Debugger Client for Cookie Access

info, news
security
Jul 21, 2020

A researcher created a tool that uses Firefox's debugging API (a set of commands for controlling Firefox remotely) to extract cookies (small files that store login information and preferences) from the browser, which is useful when an attacker doesn't have administrator access or user credentials. The tool works by connecting to Firefox's debug server, sending JavaScript commands to access the Services.cookies.cookies array, and retrieving the results, though it requires the debugging feature to be manually enabled first.
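Firefox's remote debugging protocol frames each message as its JSON body prefixed with the byte length and a colon; a sketch of that framing (the actor name and message fields are illustrative, not captured from a live session):

```python
import json

def encode_packet(message):
    """Frame one message for Firefox's remote debugging protocol, which
    sends packets over TCP as '<byte length>:<JSON>'."""
    body = json.dumps(message, separators=(",", ":")).encode()
    return str(len(body)).encode() + b":" + body

def decode_packet(data):
    """Invert encode_packet: split a 'length:JSON' packet back into an object."""
    length, _, rest = data.partition(b":")
    return json.loads(rest[: int(length.decode())])

# Shaped like the console-evaluation request the cookie tool relies on;
# the actor name "console1" is a placeholder.
request = encode_packet({"to": "console1", "type": "evaluateJSAsync",
                         "text": "Services.cookies.cookies"})
```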

Coming up: Grayhat Red Team Village talk about hacking a machine learning system

Embrace The Red
Oct 9, 2020

This is an announcement for a conference talk about attacking and defending machine learning systems, covering practical threats like brute forcing predictions (testing many inputs to guess outputs), perturbations (small changes to data that fool AI), and backdooring models (secretly poisoning training data). The speaker will discuss both ML-specific attacks and traditional security breaches, as well as defenses to protect these systems.

Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries

Embrace The Red
Sep 22, 2020

This article describes a participant's experience in Microsoft and CUJO AI's Machine Learning Security Evasion Competition, where the goal was to modify malware samples to bypass machine learning models (AI systems trained to detect malicious files) while keeping them functional. The participant attempted two main evasion techniques: hiding data in binaries using steganography (concealing information within files), which had minimal impact, and signing binaries with fake Microsoft certificates using Authenticode (a digital signature system that verifies software authenticity), which showed more promise.

Machine Learning Attack Series: Backdooring models

Embrace The Red
Sep 18, 2020

This post discusses backdooring attacks on machine learning models, where an adversary gains access to a model file (the trained AI system used in production) and overwrites it with malicious code. The threat was identified during threat modeling, which is a security planning process where teams imagine potential attacks to prepare defenses. The post indicates it will cover attacks, mitigations, and how Husky AI was built to address this risk.

Machine Learning Attack Series: Perturbations to misclassify existing images

Embrace The Red
Sep 16, 2020

This post discusses a machine learning attack technique where researchers modify existing images through small changes (perturbations, or slight adjustments to pixels) to trick an AI model into misclassifying them. For example, they aim to alter a picture of a plush bunny so that an image recognition model incorrectly identifies it as a husky dog.

Machine Learning Attack Series: Smart brute forcing

Embrace The Red
Sep 13, 2020

This post is part of a series about machine learning security attacks, with sections covering how an AI system called Husky AI was built and threat-modeled, plus investigations into attacks against it. The previous post demonstrated basic techniques to fool an image recognition model (a type of AI trained to identify what's in pictures) by generating images with solid colors or random pixels.

Machine Learning Attack Series: Brute forcing images to find incorrect predictions

Embrace The Red
Sep 9, 2020

A researcher tested a machine learning model called Husky AI by creating simple test images (all black, all white, and random pixels) and sending them through an HTTP API to see if the model would make incorrect predictions. The white canvas image successfully tricked the model into incorrectly classifying it as a husky, demonstrating a perturbation attack (where slightly modified or unusual inputs fool an AI into making wrong predictions).
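The probe set described above (all black, all white, random pixels) can be sketched like this; `query_model` stands in for the HTTP call to the real Husky AI endpoint:

```python
import random

WIDTH = HEIGHT = 28  # assumed image size, for illustration

def make_canvas(value=None):
    """One grayscale test image as a flat pixel list: a solid canvas when
    `value` is given, random noise otherwise."""
    if value is not None:
        return [value] * (WIDTH * HEIGHT)
    return [random.randint(0, 255) for _ in range(WIDTH * HEIGHT)]

def query_model(pixels):
    """Stand-in for POSTing the image to the Husky AI HTTP API; returns a
    fake husky probability so the sketch runs without the real service."""
    return sum(pixels) / len(pixels) / 255.0

# The probe set from the post: all black, all white, and random pixels.
probes = {"black": make_canvas(0), "white": make_canvas(255), "noise": make_canvas()}
scores = {name: query_model(p) for name, p in probes.items()}
```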

Threat modeling a machine learning system

Embrace The Red
Sep 6, 2020

This post explains threat modeling for machine learning systems, which is a process to systematically identify potential security attacks. The author uses Microsoft's Threat Modeling tool and STRIDE (a framework categorizing threats into spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to identify vulnerabilities in a machine learning system called 'Husky AI', and notes that perturbation attacks (where attackers query the model to trick it into making wrong predictions) are a particular concern for ML systems.

The machine learning pipeline and attacks

Embrace The Red
Sep 2, 2020

This post introduces the machine learning pipeline, which consists of sequential steps from collecting training images, pre-processing data, defining and training a model, evaluating performance, and finally deploying it to production as an API (application programming interface, a way for software to communicate). The author uses a "Husky AI" example application that identifies whether uploaded images contain huskies, and explains that understanding this pipeline's components is important for identifying potential security attacks on machine learning systems.

Getting the hang of machine learning

Embrace The Red
Sep 1, 2020

A security researcher describes their year-long study of machine learning and AI fundamentals, with the goal of understanding how to build and then attack ML systems. The post outlines their learning approach, courses, and materials for others interested in starting adversarial machine learning (attacking ML systems).

Red Teaming Telemetry Systems

Embrace The Red
Aug 12, 2020

Telemetry (data collected about how users interact with software) is often used by companies to make business decisions, but telemetry pipelines (the systems that collect and process this data) can be vulnerable to attacks. A red team security test demonstrated this by spoofing telemetry requests to falsely show a Commodore 64 as the most popular operating system, which could mislead companies into making poor decisions based on fake usage data.

Fix: The source mentions that internal red teams should run security assessments of telemetry pipelines. According to the text, this ensures that 'pipelines are assessed and proper sanitization, sanity checks, input validation for telemetry data is in place.' However, no specific technical fix, patch version, or concrete implementation details are provided.
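A sketch of the server-side input validation the fix calls for; the field names and accepted values are assumptions, not taken from any real pipeline:

```python
# Allowlist of operating systems the product actually ships on; a spoofed
# "Commodore 64" report fails this check before it ever reaches analytics.
KNOWN_OS = {"Windows", "macOS", "Linux", "Android", "iOS"}

def validate_event(event):
    """Reject telemetry events whose fields fall outside expected values --
    the sanitization and sanity checks the post argues pipelines need."""
    if event.get("os") not in KNOWN_OS:
        return False
    if not isinstance(event.get("session_seconds"), (int, float)):
        return False
    return 0 <= event["session_seconds"] <= 24 * 3600
```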

Motivated Intruder - Red Teaming for Privacy!

Embrace The Red
Jul 24, 2020

This article discusses red teaming techniques (testing methods where security professionals act as attackers to find weaknesses) that organizations can use to identify privacy issues in their systems and infrastructure. The author emphasizes that privacy violations often come from insider threats (employees or contractors with authorized access to sensitive data), and highlights the importance of regular privacy testing as required by regulations like GDPR (General Data Protection Regulation, which sets rules for protecting personal data in Europe). The article mentions the "Motivated Intruder" threat model, where an insider with access to anonymized datasets (data with identifying information supposedly removed) uses data science techniques to reidentify people and expose their identities.
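The Motivated Intruder linkage can be sketched as a join on quasi-identifiers; all field names and records here are made up for illustration:

```python
def reidentify(anonymized, public):
    """Link anonymized records back to names by joining on quasi-identifiers
    (ZIP code, birth year, sex) -- the Motivated Intruder scenario."""
    index = {}
    for person in public:
        key = (person["zip"], person["birth_year"], person["sex"])
        index.setdefault(key, []).append(person["name"])
    matches = {}
    for record in anonymized:
        key = (record["zip"], record["birth_year"], record["sex"])
        names = index.get(key, [])
        if len(names) == 1:  # unique quasi-identifier combo => re-identified
            matches[record["diagnosis"]] = names[0]
    return matches

anonymized = [{"zip": "02139", "birth_year": 1975, "sex": "F", "diagnosis": "flu"}]
public = [{"name": "A. Smith", "zip": "02139", "birth_year": 1975, "sex": "F"},
          {"name": "B. Jones", "zip": "02139", "birth_year": 1980, "sex": "M"}]
found = reidentify(anonymized, public)
```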
