LLM Apps: Don't Get Stuck in an Infinite Loop!
Summary
An attacker can use indirect prompt injection (hiding malicious instructions in data the AI reads) to make an LLM call its own tools or plugins repeatedly in a loop, driving up API costs or disrupting service. ChatGPT users are largely shielded by subscription pricing, per-conversation call limits, and a manual stop button, but the technique demonstrates a real vulnerability in how LLM applications handle recursive tool calls.
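The mitigation the summary implies, capping recursive tool calls, can be sketched as follows. This is a minimal illustration with hypothetical names (`injected_model`, `fetch_url`, `run_agent`, `MAX_TOOL_CALLS` are all assumptions, not from the source): the stand-in model simulates a prompt-injected LLM that always requests another tool call, and a hard per-request budget stops the loop instead of letting it spin indefinitely.

```python
MAX_TOOL_CALLS = 5  # hard tool-call budget per user request (hypothetical limit)


def injected_model(messages):
    """Stand-in for a prompt-injected LLM: instead of answering, it always
    requests another tool call, which would loop forever without a cap."""
    return {"tool_call": {"name": "fetch_url", "args": {"url": "https://example.com"}}}


def fetch_url(url):
    """Stand-in tool; a real tool would perform (billable) work here."""
    return f"contents of {url}"


def run_agent(user_input, model=injected_model):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(MAX_TOOL_CALLS):
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:
            # Model produced a final answer; return it to the user.
            return reply["content"]
        # Execute the requested tool and feed the result back to the model.
        result = fetch_url(**call["args"])
        messages.append({"role": "tool", "content": result})
    # Budget exhausted: fail closed rather than looping and accruing cost.
    return "error: tool-call budget exceeded"


print(run_agent("summarize this page"))
```

With the injected model, the loop runs exactly `MAX_TOOL_CALLS` times and then refuses, which is the kind of backstop (alongside pricing caps and a stop button) that limits the cost and denial-of-service impact described above.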
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2023/llm-cost-and-dos-threat/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%