Watermarking Language Models Through Language Models
Summary
Researchers developed a method for watermarking LLM outputs (embedding hidden markers to prove ownership and trace content) using a three-part system that operates entirely through input prompts, with no access to the model's internal parameters. One model generates watermarking instructions, a second produces the watermarked output, and a third detects the watermark; because only prompt-level access is needed, the approach applies to both proprietary and open-source LLMs.
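The three-role pipeline can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names (`instruction_model`, `generator_model`, `detector_model`), the keyed green-list rule, and the detection threshold are all assumptions standing in for real LLM prompts and responses.

```python
import hashlib
import random


def instruction_model(secret_key: str, vocab: list[str]) -> dict:
    """Role 1 (hypothetical): derive a watermarking instruction from a secret key.
    Here the instruction is a keyed 'green list' of marker words, chosen
    deterministically so the detector can re-derive the same list later."""
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    green_list = rng.sample(vocab, k=len(vocab) // 2)
    return {"rule": "prefer green-list words", "green_list": green_list}


def generator_model(prompt: str, instruction: dict) -> str:
    """Role 2 (hypothetical): produce output that follows the instruction.
    A real system would prepend the instruction to the LLM's prompt; this
    stand-in simply appends the marker words to simulate compliance."""
    return prompt + " " + " ".join(instruction["green_list"])


def detector_model(text: str, secret_key: str, vocab: list[str]) -> bool:
    """Role 3 (hypothetical): re-derive the green list from the same key and
    flag text whose green-word frequency exceeds a threshold (0.3 here,
    an arbitrary choice for this sketch)."""
    green = set(instruction_model(secret_key, vocab)["green_list"])
    words = text.split()
    hits = sum(w in green for w in words)
    return hits / max(len(words), 1) > 0.3
```

Because all three roles share only the secret key and prompt-level access, no model weights or logits are touched, which is what lets the same scheme run against closed APIs and open-source checkpoints alike.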
Original source: http://ieeexplore.ieee.org/document/11146861
First tracked: March 16, 2026 at 04:14 PM
Classified by LLM (prompt v3) · confidence: 92%