Security Spotlight: AppSec to AI, a Security Engineer's Journey
Summary
This article compares traditional application security (AppSec) practices with AI security, noting that familiar principles like input validation and authentication apply to both, but AI systems introduce unique risks. New attack types specific to AI, such as prompt injection (tricking an AI by hiding instructions in its input), model poisoning (tampering with training data), and membership inference attacks (determining if specific data was in training), require security engineers to develop new defensive strategies beyond traditional code-level vulnerability management.
Classification
Affected Vendors
Related Issues
CVE-2024-37052: Deserialization of untrusted data can occur in versions of the MLflow platform running version 1.1.0 or newer, enabling
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
Original source: https://protectai.com/blog/security-spotlight-appsec-to-ai
First tracked: March 13, 2026 at 12:56 PM
Classified by LLM (prompt v3) · confidence: 85%