Machine Learning Attack Series: Adversarial Robustness Toolbox Basics
Summary
This post demonstrates how to use the Adversarial Robustness Toolbox (ART), an open-source library created by IBM for testing machine learning security, to generate adversarial examples: images subtly modified so that AI models make wrong predictions. The author uses the Fast Gradient Sign Method (FGSM), an attack that nudges pixel values in the direction that increases the model's loss, to manipulate an image of a plush bunny so that the Husky AI classifier misclassifies it as a husky with 66% confidence.
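The core idea behind FGSM is simple: compute the gradient of the model's loss with respect to the input, then shift every input feature by a small step eps in the direction of the gradient's sign. The sketch below illustrates this on a toy linear "husky" classifier with made-up weights; it is not ART's FastGradientMethod or the actual Husky AI model, just a minimal NumPy demonstration of the same technique, where a "bunny" input (true label 0, not a husky) is pushed across the decision boundary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Untargeted FGSM for a linear logistic classifier.

    The gradient of the binary cross-entropy loss w.r.t. the
    input x is (p - y) * w, so the attack adds eps * sign(grad).
    """
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy "husky" detector: hypothetical weights over 4 features.
w = np.array([3.0, -4.0, 1.0, 2.0])

# Clean "bunny" input: confidently classified as NOT a husky.
x = np.array([-0.5, 0.5, -0.5, -0.5])
y = 0.0  # true label: not a husky

clean_conf = sigmoid(w @ x)            # husky confidence before the attack
x_adv = fgsm(x, y, w, eps=0.8)         # each feature moves by at most 0.8
adv_conf = sigmoid(w @ x_adv)          # husky confidence after the attack

print(f"clean: {clean_conf:.3f}  adversarial: {adv_conf:.3f}")
```

Because the model is linear, moving eps in the sign of the gradient provably increases the loss, so the perturbed input's husky confidence rises above 0.5 and the classifier's prediction flips, mirroring the bunny-to-husky flip described in the post.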
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2020/husky-ai-adversarial-robustness-toolbox-testing/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%