The Next Frontier of Runtime Assembly Attacks: Leveraging LLMs to Generate Phishing JavaScript in Real Time
Summary
Attackers can use large language models (LLMs, AI systems trained on vast amounts of text to generate human-like responses) to create phishing pages that appear benign at first but transform into malicious sites after a victim loads them. The page secretly prompts an LLM with carefully crafted instructions that trick the AI into ignoring its safety rules and returning malicious JavaScript (code that runs in web browsers); the page then assembles and executes that code inside the victim's browser in real time. Because the malicious code is generated fresh on each visit and arrives from a trusted AI service, it evades traditional network security checks that rely on inspecting static page content.
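The core mechanism can be illustrated without any malicious content: code arrives over the network as plain text and only becomes executable when the page assembles it at runtime. The sketch below stubs out the LLM response locally (no real service, prompt, or payload is shown; all names are illustrative), but the pattern is the same one the summary describes — until the dynamic-code sink fires, static analysis of the page source reveals nothing.

```javascript
// Conceptual sketch of the runtime-assembly pattern. A stubbed "LLM"
// response stands in for a real API call; on the wire such a response
// is just a string, indistinguishable from ordinary text content.
function fakeLlmResponse() {
  return 'self.assembled = "page transformed at runtime";';
}

// The page turns that text into behavior only at execution time, via a
// dynamic-code sink (here the Function constructor). Before this call
// runs, the page's own source contains no payload to scan.
function assembleAndRun(generatedSource, target) {
  const run = new Function("self", generatedSource); // dynamic-code sink
  run(target);
}

const page = {};
assembleAndRun(fakeLlmResponse(), page);
```

Because `generatedSource` can differ on every visit, signature-based or static content checks have nothing stable to match against — which is why the mitigation below focuses on the point of execution instead.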
Solution / Mitigation
The source explicitly recommends runtime behavioral analysis to detect and block malicious activity at the point of execution within the browser. Palo Alto Networks customers are advised to use Advanced URL Filtering, Prisma AIRS, and Prisma Browser with Advanced Web Protection. Organizations are also encouraged to use the Unit 42 AI Security Assessment to help ensure safe AI use and development.
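As a rough illustration of what "runtime behavioral analysis at the point of execution" can mean in a browser, the sketch below wraps the two common dynamic-code sinks, `eval` and the `Function` constructor, so code strings are inspected before they run. This is a minimal sketch under stated assumptions: the `suspiciousTokens` list and the flagging rule are invented for illustration and are not Palo Alto Networks' actual detection logic.

```javascript
// Minimal sketch: intercept dynamic-code sinks so a behavioral-analysis
// layer can inspect dynamically assembled code at the point of execution.
// The token list and flagging heuristic are illustrative assumptions.
const executionLog = [];
const suspiciousTokens = ["document.cookie", "atob(", "fromCharCode"];

// Record every code string reaching a sink; flag obviously risky ones.
function inspect(source) {
  const flagged = suspiciousTokens.some((t) => source.includes(t));
  executionLog.push({ source, flagged });
  return flagged;
}

// Wrap eval: code strings pass through the inspector before evaluation.
const realEval = globalThis.eval;
globalThis.eval = function (source) {
  if (typeof source === "string" && inspect(source)) {
    throw new Error("Blocked dynamically assembled code at runtime");
  }
  return realEval(source);
};

// Wrap the Function constructor, the other common assembly sink; its
// last argument is the function body to be compiled.
const RealFunction = globalThis.Function;
globalThis.Function = function (...args) {
  const body = String(args[args.length - 1] ?? "");
  if (inspect(body)) {
    throw new Error("Blocked dynamically assembled code at runtime");
  }
  return RealFunction(...args);
};
```

With the hooks installed, benign dynamic code still runs (`eval("1+1")` returns 2) while a string containing a flagged token is stopped before it executes. Production behavioral analysis looks at actual runtime behavior rather than token matching, but the interception point is the same.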
Classification
Affected Vendors
Related Issues
Original source: https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%