llm-all-models-async 0.1
info · news · LLM-Specific · industry
Source: Simon Willison's Weblog · March 31, 2026
Summary
The llm-all-models-async 0.1 plugin allows synchronous (blocking) AI models from LLM plugins to work as asynchronous (non-blocking) models by running them in a thread pool (a group of worker threads that handle tasks in parallel). This solves a compatibility problem where Datasette, which only supports async models, couldn't use sync-only plugins like llm-mrchatterbox.
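The general technique described above can be sketched with Python's standard library: a blocking call is handed off to a worker thread so the event loop stays free. This is a minimal illustration of the sync-to-async wrapping pattern, not the plugin's actual implementation; `sync_generate` and `async_generate` are hypothetical names invented for this example.

```python
import asyncio
import time

# Hypothetical stand-in for a synchronous (blocking) model call.
# Not the plugin's actual API.
def sync_generate(prompt: str) -> str:
    time.sleep(0.1)  # simulate blocking model inference
    return f"response to: {prompt}"

# Wrap the blocking call so it runs in the default thread pool
# instead of blocking the event loop.
async def async_generate(prompt: str) -> str:
    return await asyncio.to_thread(sync_generate, prompt)

async def main() -> list[str]:
    # Two blocking "model" calls now run concurrently in worker threads.
    return await asyncio.gather(
        async_generate("hello"),
        async_generate("world"),
    )

if __name__ == "__main__":
    print(asyncio.run(main()))
```

An async-only host such as Datasette can then await the wrapped call like any native async model, while the underlying sync code runs unchanged in a thread.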
Classification
Attack Sophistication: Trivial
AI Component Targeted: Framework
Affected Vendors
Monthly digest — independent AI security research
Original source: https://simonwillison.net/2026/Mar/31/llm-all-models-async/#atom-everything
First tracked: March 31, 2026 at 08:00 PM
Classified by LLM (prompt v3) · confidence: 85%