New tools, products, platforms, funding rounds, and company developments in AI security.
NFT standards like EIP-721 and EIP-1155 have a critical flaw: they don't require a cryptographic hash (a unique digital fingerprint) linking the actual content to the blockchain entry, so the blockchain only proves you own a token ID, not the digital asset it claims to represent. This means metadata files and images can be stored centrally (for example, on Amazon S3), modified by anyone with access, or disappear entirely, leaving you unable to prove ownership of the original content even if you keep a local copy.
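The missing link the summary describes is a content commitment: if the chain stored a hash of the metadata at mint time, anyone could later check whether the off-chain file still matches. A minimal sketch of that verification, with hypothetical metadata (nothing here comes from a real contract):

```python
import hashlib

# Hypothetical example: a record that commits to the content's SHA-256 hash.
metadata = b'{"name": "Husky #1", "image": "ipfs://example"}'
onchain_hash = hashlib.sha256(metadata).hexdigest()  # stored on-chain at mint time

def verify(content: bytes, committed_hash: str) -> bool:
    """True only if the off-chain content still matches the on-chain commitment."""
    return hashlib.sha256(content).hexdigest() == committed_hash

print(verify(metadata, onchain_hash))                # unchanged content verifies
print(verify(b'{"tampered": true}', onchain_hash))   # any modification fails
```

Without such a commitment in the standard, a centrally hosted metadata file can be swapped and no on-chain check will ever notice.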
A speaker announced they would present at the Hong Kong Information Security Summit 2021 on March 9th, sharing insights on protecting modern remote workplaces from a red teaming perspective (the practice of simulating attacks to test security defenses). The talk, titled 'Red Team Strategies for Helping Protect the Modern Workplace,' focuses on security strategies relevant to distributed work environments.
This is a disclaimer page for educational material about red team strategies (methods used by authorized security testers to find vulnerabilities by simulating attacks). The content emphasizes that penetration testing (authorized attempts to break into systems to find security weaknesses) must have proper permission before being performed.
Microsoft analyzed the Sunburst attack (a major 2020 breach targeting SolarWinds software) and found that attackers used Cobalt Strike (a tool for command and control, letting attackers remotely direct compromised systems) alongside custom modifications to hide their backdoors in software. The attackers made each compromised system unique with different names and folder locations to avoid detection.
This article describes a method for creating security scores that compare different teams or services based on their security issues, helping organizations identify which areas need the most attention. The scoring system uses a multiplier (a scaling factor that makes severe issues count for much more than minor ones) to weight critical bugs more heavily than lower-severity ones, then sums these weighted values into a single score that can be displayed on a dashboard. By showing these scores to management, teams can have discussions about why some services have worse scores than others, which encourages improvements in security practices.
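The weighted-sum idea described above can be sketched in a few lines. The multiplier values and team names below are illustrative assumptions, not figures from the article:

```python
# Hypothetical severity multipliers: severe issues count far more than minor ones.
MULTIPLIERS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def security_score(findings: dict) -> int:
    """Sum of open findings weighted by severity; higher means worse."""
    return sum(MULTIPLIERS[severity] * count for severity, count in findings.items())

# Illustrative per-team open-finding counts for a dashboard.
teams = {
    "payments": {"critical": 2, "high": 3, "low": 5},
    "website":  {"high": 1, "medium": 4, "low": 2},
}
for team, findings in sorted(teams.items(), key=lambda kv: -security_score(kv[1])):
    print(f"{team}: {security_score(findings)}")
```

Sorting by score surfaces the services that most need attention, which is the conversation starter with management the article describes.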
FireEye, a major security company, was attacked and adversaries accessed their internal red teaming tools (software used to test security by simulating attacks). The post warns that red teams are attractive targets for attackers and recommends implementing protective measures like honeypot machines (fake systems designed to detect intruders) and monitoring login attempts to quickly detect when attackers are trying to compromise their systems.
NAT Slipstreaming is a technique where visiting a malicious website can punch a hole through your router's firewall by exploiting the Application Layer Gateway (ALG, a feature that helps protocols like SIP, Session Initiation Protocol, work with firewalls). The attack works because the ALG is designed to allow devices inside a network to open firewall ports, but an attacker can abuse this intended functionality.
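The abuse hinges on the ALG parsing application-layer content it finds in traffic. A rough illustration of the kind of SIP REGISTER text an attacker smuggles so the ALG opens a port to the internal host; all addresses and ports are hypothetical, and this is a sketch of the message shape, not a working exploit:

```python
# Illustrative only: the sort of SIP payload NAT Slipstreaming plants in traffic.
internal_ip = "192.168.0.10"   # victim's internal address (hypothetical)
attacker_port = 1234           # port the attacker wants opened (hypothetical)

sip_payload = (
    "REGISTER sip:attacker.example SIP/2.0\r\n"
    f"Contact: <sip:user@{internal_ip}:{attacker_port}>\r\n"
    "\r\n"
)
# A SIP ALG that parses this Contact header may forward attacker_port
# on the firewall through to the internal host -- the intended behaviour
# the attack abuses.
print(sip_payload)
```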
During a Red Team Operation (a simulated attack where security testers try to break into a company's systems), researchers discovered that Blue Team infrastructure, like endpoint agents (software that monitors and controls devices on a network), can be exploited for remote code execution (running commands on systems without authorization) if not properly protected. Companies often lack adequate security controls like MFA (multi-factor authentication, requiring multiple verification steps) and monitoring to prevent unauthorized access to these agents.
Offensive security teams are groups that test and challenge an organization's defenses by simulating attacks from an adversary's perspective. Rather than debating terminology like 'red team' or 'pentest' (security testing where authorized people attempt to break into systems), the source suggests defining these teams by the services they provide to customers within the organization, including business groups, defensive teams, developers, and employees.
The VS Code Python extension had a vulnerability where HTML and JavaScript code could be injected through error messages (called tracebacks, which show where a program failed) in Jupyter Notebooks, potentially allowing attackers to steal user information or take control of their computer. The vulnerability occurred because strings in error messages were not properly escaped (prevented from being interpreted as code), and could be triggered by modifying a notebook file directly or by having the notebook connect to a remote server controlled by an attacker.
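The general fix for this class of bug is to escape untrusted strings before rendering them as HTML. A minimal sketch of the difference, using Python's standard `html` module with a made-up traceback string (not the extension's actual code):

```python
import html

# Attacker-influenced traceback text from a notebook (hypothetical payload).
traceback_text = "NameError: <img src=x onerror=alert(1)> is not defined"

# Unsafe: interpolating the raw string lets the browser execute the markup.
unsafe = f"<div class='output'>{traceback_text}</div>"

# Safe: escaping first makes the payload display as inert text.
safe = f"<div class='output'>{html.escape(traceback_text)}</div>"
print(safe)
```

Escaped output renders the payload literally (`&lt;img ...&gt;`) instead of injecting an element into the page.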
This article argues against the idea that manual red teaming (the practice of simulating attacks to find security weaknesses) is dying due to automation. The author contends that red teaming is fundamentally about discovering unknown vulnerabilities and exploring creative attack strategies rather than just exploiting known bugs, and therefore cannot be fully automated even though adversaries will continue using AI and automation tools to scale their operations.
Survivorship bias is the logical error of focusing only on successes while ignoring failures, which can lead to incomplete understanding. The article applies this concept to red teaming (security testing where a team acts as attackers to find vulnerabilities) by noting that the MITRE ATT&CK framework (a database of known adversary tactics and techniques) only covers publicly disclosed threats, potentially causing security teams to overlook attack methods that haven't been publicly documented or aren't in the framework.
Fix: The source explicitly recommends several protective measures:
(1) Create honeypot machines with fake credentials that trigger notifications and alerts when accessed.
(2) Set up email notifications for logon attempts and successful logons, and forward events to the blue team (defensive security team).
(3) Disable remote management endpoints and allow-list source IP addresses in the firewall.
(4) Lock down machines by blocking all inbound connections while allowing outbound ones, using the Windows command 'netsh advfirewall set allprofiles firewallpolicy blockinboundalways,allowoutbound' or the Linux commands 'sudo ufw enable', 'sudo ufw default deny incoming', and 'sudo ufw default allow outgoing'.
(5) Perform red vs. red testing (security assessments where one red team tests another) to verify the red team has proper security controls in place.
This is an index page summarizing a series of blog posts about machine learning security from a red teaming perspective (testing a system by simulating attacker behavior). The posts cover ML basics, threat modeling, practical attacks like adversarial examples (inputs designed to fool AI models), model theft, backdoors (hidden malicious code inserted into models), and how traditional security attacks (like weak access control) also threaten AI systems.
This post describes how Generative Adversarial Networks (GANs, a type of AI system where two neural networks compete to create realistic fake images) can be used to generate fake husky photos that trick an image recognition system called Husky AI into misclassifying them as real huskies. The author explains they investigated this attack method and references a GAN course to learn more about the technique.
AI and machine learning systems have caused serious problems in real-world situations, including Amazon's recruiting tool that discriminated against women, Microsoft's chatbot that became racist and sexist, IBM's cancer treatment recommendation system that doctors criticized, and Facebook's AI that made incorrect translations leading to someone's arrest. These examples show that AI systems can develop and spread biased predictions and failures with harmful consequences. The article highlights the importance of addressing bias when building and deploying AI systems responsibly.
Repudiation is a security threat where someone denies performing an action, such as replacing an AI model file with a malicious version. The source explains how to use auditd (a Linux auditing tool) and centralized monitoring systems like Splunk or Elastic Stack to create audit logs that track who accessed or modified files and when, helping prove or investigate whether specific accounts made changes.
Fix: To mitigate repudiation threats, the source recommends: (1) installing and configuring auditd on Linux using 'sudo apt install auditd', (2) adding file monitoring rules with auditctl (example: 'sudo auditctl -w /path/to/file -p rwa -k keyword' to audit read, write, and append operations), and (3) pushing audit logs to centralized monitoring systems such as Splunk, Elastic Stack, or Azure Sentinel for analysis and visualization.
This is a YouTube talk about building and breaking machine learning systems, presented at a security conference (GrayHat Red Team Village). The speaker is exploring whether to develop this content into a hands-on workshop where participants could practice these concepts.
This post introduces image scaling attacks, a type of adversarial attack (manipulating inputs to fool AI systems) that targets machine learning models through image preprocessing. The author discovered this attack concept while preparing demos and references academic research on understanding and preventing these attacks.
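The core trick of an image scaling attack is that downscaling only samples a few pixels, so an attacker can plant a second image exactly where the sampler looks. A minimal NumPy sketch with a naive nearest-neighbour downscaler (real resampling libraries choose sample points differently; this illustrates the principle only):

```python
import numpy as np

def nearest_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Naive nearest-neighbour downscale: keep every `factor`-th pixel.
    return img[::factor, ::factor]

# The 2x2 image the attacker wants the model to actually receive.
hidden = np.array([[0, 50], [100, 150]], dtype=np.uint8)

factor = 4
big = np.full((8, 8), 255, dtype=np.uint8)   # looks uniformly white to a human
big[::factor, ::factor] = hidden             # plant pixels where the sampler looks

small = nearest_downscale(big, factor)       # recovers the hidden image
print(small)
```

Only 4 of the 64 pixels are modified, so the full-size image still looks benign to a reviewer while the preprocessed input is entirely attacker-controlled.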
This post demonstrates how to use the Adversarial Robustness Toolbox (ART, an open-source library created by IBM for testing machine learning security) to generate adversarial examples, which are modified images designed to trick AI models into making wrong predictions. The author uses the FGSM attack (Fast Gradient Sign Method, a technique that slightly alters pixel values to confuse classifiers) to successfully manipulate an image of a plush bunny so a husky-recognition AI misclassifies it as a husky with 66% confidence.
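FGSM itself is simple enough to show without a deep model: perturb the input by epsilon in the direction of the sign of the loss gradient. A self-contained sketch on a toy logistic-regression classifier (weights and inputs are made up; the post's actual demo uses ART against the Husky AI model):

```python
import numpy as np

# Toy linear classifier: p(y=1 | x) = sigmoid(w . x + b)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def predict(x: np.ndarray) -> float:
    return sigmoid(w @ x + b)

def fgsm(x: np.ndarray, y: int, eps: float) -> np.ndarray:
    # For logistic regression the cross-entropy gradient w.r.t. the input
    # is (p - y) * w; FGSM steps eps in the sign of that gradient.
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.2, -0.4, 1.0])
y = 1                               # true label
x_adv = fgsm(x, y, eps=0.5)
print(predict(x))                   # confident correct prediction
print(predict(x_adv))               # confidence collapses after the attack
```

Each input feature moves by at most eps, yet the prediction flips, which is the same effect the post achieves on images with ART's FGSM implementation.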
This item is promotional content for a conference talk about attacking and defending machine learning systems, presented at GrayHat 2020's Red Team Village. The speaker created an introductory video for a session titled 'Learning by doing: Building and breaking a machine learning system,' scheduled for October 31st, 2020.
Fix: Microsoft Security Response Center (MSRC) confirmed the vulnerability and fixed it, with the fix released in October 2020 as documented in their security bulletin.
Embrace The Red