aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1289 items

Broken NFT standards

info, news
security
Mar 19, 2021

NFT standards like EIP-721 and EIP-1155 have a critical flaw: they don't require a cryptographic hash (a unique digital fingerprint) linking the actual content to the blockchain entry, so the blockchain only proves you own a token ID, not the digital asset it claims to represent. Metadata files and images can therefore be stored centrally (for example, on Amazon S3), be modified by anyone with access, or disappear entirely, leaving you unable to prove ownership of the original content even if you have a local copy.

Embrace The Red
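The missing mechanism the summary describes can be sketched concretely: if the on-chain record held a content hash, anyone could verify an off-chain copy against it. A minimal Python sketch (the function name and placeholder bytes are illustrative, not part of EIP-721):

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw asset bytes: the digest changes if even
    one byte of the content changes."""
    return hashlib.sha256(data).hexdigest()

# What a hash-binding standard could store on-chain alongside the token ID.
asset = b"<raw image bytes of the artwork>"
onchain_digest = content_fingerprint(asset)

# Verification later: recompute from a local copy and compare. A modified
# or swapped S3 file would no longer match the on-chain digest.
local_copy = b"<raw image bytes of the artwork>"
assert content_fingerprint(local_copy) == onchain_digest
```

Without such a digest on-chain, the comparison above has nothing trustworthy to compare against, which is exactly the gap the post describes.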

Hong Kong InfoSec Summit 2021 Talk - The adversary will come to your house!

info, news
security
Mar 3, 2021

A speaker announced they would present at the Hong Kong Information Security Summit 2021 on March 9th, sharing insights on protecting modern remote workplaces from a red teaming perspective (the practice of simulating attacks to test security defenses). The talk, titled 'Red Team Strategies for Helping Protect the Modern Workplace,' focuses on security strategies relevant to distributed work environments.

An alternative perspective on the death of manual red teaming

info, news
security, safety

Cybersecurity Attacks - Red Team Strategies Kindle Edition for free

info, news
security
Feb 4, 2021

This is a disclaimer page for educational material about red team strategies (methods used by authorized security testers to find vulnerabilities by simulating attacks). The content emphasizes that penetration testing (authorized attempts to break into systems to find security weaknesses) must have proper permission before being performed.

Team A and Team B: Sunburst, Teardrop and Raindrop

info, news
security
Feb 2, 2021

Microsoft analyzed the Sunburst attack (a major 2020 breach targeting SolarWinds software) and found that attackers used Cobalt Strike (a tool for command and control, letting attackers remotely direct compromised systems) alongside custom modifications to hide their backdoors in software. The attackers made each compromised system unique with different names and folder locations to avoid detection.

Survivorship Bias and Red Teaming

info, news
security, research

Gamifying Security with Red Team Scores

info, news
security
Jan 11, 2021

This article describes a method for creating security scores that compare different teams or services based on their security issues, helping organizations identify which areas need the most attention. The scoring system uses a multiplier (a scaling factor that makes severe issues count for much more than minor ones) to weight critical bugs more heavily than lower-severity ones, then sums these weighted values into a single score that can be displayed on a dashboard. By showing these scores to management, teams can have discussions about why some services have worse scores than others, which encourages improvements in security practices.
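The weighting idea can be sketched in a few lines of Python. The multiplier values and severity names below are assumptions for illustration, not the article's actual figures:

```python
# Weight severities so one critical bug outweighs many low-severity ones.
SEVERITY_MULTIPLIER = {"critical": 1000, "high": 100, "medium": 10, "low": 1}

def security_score(open_bugs: dict) -> int:
    """Sum severity counts, each scaled by its multiplier, into one number
    suitable for a dashboard."""
    return sum(SEVERITY_MULTIPLIER[sev] * count for sev, count in open_bugs.items())

# Two hypothetical services: one critical bug dominates twenty low ones.
service_a = {"critical": 1, "low": 5}    # -> 1005
service_b = {"medium": 3, "low": 20}     # -> 50
scores = {"service-a": security_score(service_a),
          "service-b": security_score(service_b)}
```

The large gap between the two scores is the point of the multiplier: it makes the service with a critical bug stand out immediately in a management-facing dashboard.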

Actively protecting pen testers and pen testing assets

info, news
security
Dec 8, 2020

FireEye, a major security company, was attacked and adversaries accessed their internal red teaming tools (software used to test security by simulating attacks). The post warns that red teams are attractive targets for attackers and recommends implementing protective measures like honeypot machines (fake systems designed to detect intruders) and monitoring login attempts to quickly detect when attackers are trying to compromise their systems.

Machine Learning Attack Series: Overview

info, news
security, research

Machine Learning Attack Series: Generative Adversarial Networks (GANs)

info, news
security, research

Assuming Bias and Responsible AI

info, news
safety, policy

Abusing Application Layer Gateways (NAT Slipstreaming)

info, news
security
Nov 24, 2020

NAT Slipstreaming is a technique where visiting a malicious website can punch a hole through your router's firewall by exploiting the Application Layer Gateway (ALG, a feature that helps protocols like SIP, Session Initiation Protocol, work with firewalls). The attack works because the ALG is designed to allow devices inside a network to open firewall ports, but an attacker can abuse this intended functionality.
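The core trick can be illustrated with a harmless sketch: the attack smuggles what looks like a SIP REGISTER message inside ordinary browser traffic so the router's ALG "helpfully" opens a port. The addresses, port, and padding below are placeholders, not a working exploit:

```python
# Illustrative payload shape only; real NAT Slipstreaming also depends on
# careful packet sizing so the SIP text lands at the start of a TCP segment.
internal_ip = "192.168.1.50"   # victim's LAN address, leaked or guessed by the attacker
target_port = 5060             # port the ALG is tricked into forwarding inbound

sip_register = (
    "REGISTER sip:attacker.example SIP/2.0\r\n"
    f"Contact: <sip:user@{internal_ip}:{target_port}>\r\n"
    "\r\n"
)

# Smuggled inside the body of a browser-initiated HTTP POST: the router's
# SIP ALG parses the body as SIP and opens target_port to the internal host.
http_body = "X" * 1024 + sip_register
```

The point of the sketch is that nothing here requires malware on the victim's machine: a web page can emit this traffic, and the ALG's intended protocol-fixup behavior does the rest.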

Machine Learning Attack Series: Repudiation Threat and Auditing

info, news
security, research

Video: Building and breaking a machine learning system

info, news
security, research

Machine Learning Attack Series: Image Scaling Attacks

info, news
security, research

Leveraging the Blue Team's Endpoint Agent as C2

info, news
security
Oct 26, 2020

During a Red Team Operation (a simulated attack where security testers try to break into a company's systems), researchers discovered that Blue Team infrastructure, like endpoint agents (software that monitors and controls devices on a network), can be exploited for remote code execution (running commands on systems without authorization) if not properly protected. Companies often lack adequate security controls like MFA (multi-factor authentication, requiring multiple verification steps) and monitoring to prevent unauthorized access to these agents.

Machine Learning Attack Series: Adversarial Robustness Toolbox Basics

info, news
research, security

Hacking neural networks - so we don't get stuck in the matrix

info, news
security, research

What does an offensive security team actually do?

info, news
security
Oct 19, 2020

Offensive security teams are groups that test and challenge an organization's defenses by simulating attacks from an adversary's perspective. Rather than debating terminology like 'red team' or 'pentest' (security testing where authorized people attempt to break into systems), the source suggests defining these teams by the services they provide to customers within the organization, including business groups, defensive teams, developers, and employees.

CVE 2020-16977: VS Code Python Extension Remote Code Execution

high, news
security
Oct 14, 2020

The VS Code Python extension had a vulnerability where HTML and JavaScript code could be injected through error messages (called tracebacks, which show where a program failed) in Jupyter Notebooks, potentially allowing attackers to steal user information or take control of their computer. The vulnerability occurred because strings in error messages were not properly escaped (prevented from being interpreted as code), and could be triggered by modifying a notebook file directly or by having the notebook connect to a remote server controlled by an attacker.
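The class of fix is straightforward to illustrate: treat traceback text as data, not markup, before it reaches an HTML renderer. This is a generic Python sketch of that principle, not the extension's actual patch:

```python
import html

# A traceback string that embeds hostile markup, e.g. from a crafted notebook.
traceback_text = "NameError: name '<img src=x onerror=alert(1)>' is not defined"

# Unescaped, a webview would interpret the tag as HTML; escaped, the browser
# displays the string literally instead of executing it.
safe = html.escape(traceback_text)
# '<' becomes '&lt;', '>' becomes '&gt;', and quotes are escaped as well.
```

Any string that originates from notebook content or a remote kernel should pass through such escaping before being rendered, since either one can be attacker-controlled.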

An alternative perspective on the death of manual red teaming

Embrace The Red
Feb 8, 2021

This article argues against the idea that manual red teaming (the practice of simulating attacks to find security weaknesses) is dying due to automation. The author contends that red teaming is fundamentally about discovering unknown vulnerabilities and exploring creative attack strategies rather than just exploiting known bugs, and therefore cannot be fully automated even though adversaries will continue using AI and automation tools to scale their operations.

Survivorship Bias and Red Teaming

Embrace The Red
Jan 22, 2021

Survivorship bias is the logical error of focusing only on successes while ignoring failures, which can lead to incomplete understanding. The article applies this concept to red teaming (security testing where a team acts as attackers to find vulnerabilities) by noting that the MITRE ATT&CK framework (a database of known adversary tactics and techniques) only covers publicly disclosed threats, potentially causing security teams to overlook attack methods that haven't been publicly documented or aren't in the framework.

Actively protecting pen testers and pen testing assets

Embrace The Red

Fix: The source explicitly recommends several protective measures:
1. Create honeypot machines with fake credentials, and trigger notifications and alerts when they are accessed.
2. Set up notifications for logon attempts and successful logons via email, and forward events to the blue team (defensive security team).
3. Disable remote management endpoints and allow-list source IP addresses in the firewall.
4. Lock down machines by blocking all inbound connections while allowing outbound ones, using the Windows command 'netsh advfirewall set allprofiles firewallpolicy blockinboundalways,allowoutbound' or the Linux commands 'sudo ufw enable', 'sudo ufw default deny incoming', and 'sudo ufw default allow outgoing'.
5. Perform red vs. red testing (security assessments where one red team tests another) to verify the red team has proper security controls in place.

Machine Learning Attack Series: Overview

Embrace The Red
Nov 26, 2020

This is an index page summarizing a series of blog posts about machine learning security from a red teaming perspective (testing a system by simulating attacker behavior). The posts cover ML basics, threat modeling, practical attacks like adversarial examples (inputs designed to fool AI models), model theft, backdoors (hidden malicious code inserted into models), and how traditional security attacks (like weak access control) also threaten AI systems.

Machine Learning Attack Series: Generative Adversarial Networks (GANs)

Embrace The Red
Nov 25, 2020

This post describes how Generative Adversarial Networks (GANs, a type of AI system where two neural networks compete to create realistic fake images) can be used to generate fake husky photos that trick an image recognition system called Husky AI into misclassifying them as real huskies. The author explains they investigated this attack method and references a GAN course to learn more about the technique.

Assuming Bias and Responsible AI

Embrace The Red
Nov 24, 2020

AI and machine learning systems have caused serious problems in real-world situations, including Amazon's recruiting tool that discriminated against women, Microsoft's chatbot that became racist and sexist, IBM's cancer treatment recommendation system that doctors criticized, and Facebook's AI that made incorrect translations leading to someone's arrest. These examples show that AI systems can develop and spread biased predictions and failures with harmful consequences. The article highlights the importance of addressing bias when building and deploying AI systems responsibly.

Machine Learning Attack Series: Repudiation Threat and Auditing

Embrace The Red
Nov 10, 2020

Repudiation is a security threat where someone denies performing an action, such as replacing an AI model file with a malicious version. The source explains how to use auditd (a Linux auditing tool) and centralized monitoring systems like Splunk or Elastic Stack to create audit logs that track who accessed or modified files and when, helping prove or investigate whether specific accounts made changes.

Fix: To mitigate repudiation threats, the source recommends:
1. Install and configure auditd on Linux using 'sudo apt install auditd'.
2. Add file monitoring rules with auditctl (example: 'sudo auditctl -w /path/to/file -p rwa -k keyword' to audit read, write, and attribute-change operations).
3. Push audit logs to centralized monitoring systems such as Splunk, Elastic Stack, or Azure Sentinel for analysis and visualization.

Video: Building and breaking a machine learning system

Embrace The Red
Nov 5, 2020

This is a YouTube talk about building and breaking machine learning systems, presented at a security conference (GrayHat Red Team Village). The speaker is exploring whether to develop this content into a hands-on workshop where participants could practice these concepts.

Machine Learning Attack Series: Image Scaling Attacks

Embrace The Red
Oct 28, 2020

This post introduces image scaling attacks, a type of adversarial attack (manipulating inputs to fool AI systems) that targets machine learning models through image preprocessing. The author discovered this attack concept while preparing demos and references academic research on understanding and preventing these attacks.

Machine Learning Attack Series: Adversarial Robustness Toolbox Basics

Embrace The Red
Oct 22, 2020

This post demonstrates how to use the Adversarial Robustness Toolbox (ART, an open-source library created by IBM for testing machine learning security) to generate adversarial examples, which are modified images designed to trick AI models into making wrong predictions. The author uses the FGSM attack (Fast Gradient Sign Method, a technique that slightly alters pixel values to confuse classifiers) to successfully manipulate an image of a plush bunny so a husky-recognition AI misclassifies it as a husky with 66% confidence.
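The FGSM step itself is one line; here is a plain-NumPy sketch on a toy logistic-regression model. The model, data, and eps value are assumptions for illustration only; the post uses ART's implementation against an image classifier:

```python
import numpy as np

# FGSM nudges every input feature by eps in the direction that increases
# the model's loss, using only the sign of the input gradient.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # toy model parameters
x, y = rng.normal(size=8), 1.0          # one input and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For binary cross-entropy loss, the gradient w.r.t. the INPUT x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)       # the FGSM step

# The perturbed input yields a lower probability for the true class,
# i.e. higher loss and a better chance of misclassification.
p_adv = sigmoid(w @ x_adv + b)
```

The same sign-of-gradient step, applied per pixel with a small eps, is what turns the plush-bunny image into a "husky" in the post's demo.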

Hacking neural networks - so we don't get stuck in the matrix

Embrace The Red
Oct 20, 2020

This item is promotional content for a conference talk about attacking and defending machine learning systems, presented at GrayHat 2020's Red Team Village. The speaker created an introductory video for a session titled 'Learning by doing: Building and breaking a machine learning system,' scheduled for October 31st, 2020.

CVE 2020-16977: VS Code Python Extension Remote Code Execution

Embrace The Red

Fix: Microsoft Security Response Center (MSRC) confirmed the vulnerability and fixed it, with the fix released in October 2020 as documented in their security bulletin.
