Automatic Tool Invocation when Browsing with ChatGPT - Threats and Mitigations
Summary
ChatGPT's browsing tool can be tricked into automatically invoking other tools (such as image creation or memory management) when a user visits a website containing hidden instructions, an attack known as prompt injection (hijacking an AI by hiding instructions in its input). OpenAI has added some protections, but minor prompting tricks can bypass them, and the issue affects other AI applications as well.
Solution / Mitigation
For custom GPTs with Actions, creators can set the x-openai-isConsequential flag to keep the user in control of tool invocation, though the source notes this mitigation 'still lacks a great user experience, like better visualization to understand what the action is about to do.'
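As a sketch of how the mitigation is applied, the flag is set per operation in the Action's OpenAPI schema. The path and operation below are hypothetical examples; x-openai-isConsequential is the actual OpenAI extension field.

```yaml
# Illustrative excerpt of a GPT Action's OpenAPI spec.
# The /delete-record path and deleteRecord operation are made up
# for this example; x-openai-isConsequential is the real extension.
openapi: 3.1.0
info:
  title: Example Action
  version: 1.0.0
paths:
  /delete-record:
    post:
      operationId: deleteRecord
      # true: ChatGPT asks the user for confirmation before every call
      # and does not offer an "always allow" option.
      # false or absent: the user may grant blanket approval,
      # which an injected prompt could then exploit silently.
      x-openai-isConsequential: true
      responses:
        "200":
          description: Record deleted
```

Marking state-changing operations as consequential forces a human confirmation step, which is exactly the gap the automatic tool invocation attack exploits.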
Original source: https://embracethered.com/blog/posts/2024/llm-apps-automatic-tool-invocations/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%