{
  "data": {
    "id": "5515b126-c20d-472c-8131-8c92072b41eb",
    "title": "Introducing talkie: a 13B vintage language model from 1930",
    "summary": "Researchers have created talkie, a 13 billion-parameter language model (a neural network with 13 billion adjustable values) trained entirely on English text from before 1931 to study how AI performs on historical knowledge and invention tasks. The base model uses only out-of-copyright data, but the chat version required fine-tuning (additional training to adjust behavior) with help from modern AI systems like Claude, which introduced some knowledge from after 1931 that the researchers are working to eliminate.",
    "solution": "The talkie team states they 'aspire to eventually move beyond this limitation' by using 'vintage base models themselves as judges to enable a fully bootstrapped era-appropriate post-training pipeline,' meaning they plan to use talkie's own historical knowledge rather than modern AI systems for future training adjustments. However, this is described as a future goal, not a solution currently implemented.",
    "labels": ["research"],
    "sourceUrl": "https://simonwillison.net/2026/Apr/28/talkie/#atom-everything",
    "publishedAt": "2026-04-28T02:47:42.000Z",
    "cveId": null,
    "cweIds": null,
    "cvssScore": null,
    "cvssSeverity": null,
    "severity": "info",
    "attackType": [],
    "issueType": "news",
    "affectedPackages": null,
    "affectedVendors": [],
    "affectedVendorsRaw": ["talkie", "Claude", "Anthropic"],
    "classifierModel": "claude-haiku-4-5-20251001",
    "classifierPromptVersion": "v3",
    "cvssVector": null,
    "attackVector": null,
    "attackComplexity": null,
    "privilegesRequired": null,
    "userInteraction": null,
    "exploitMaturity": null,
    "epssScore": null,
    "patchAvailable": null,
    "disclosureDate": "2026-04-28T02:47:42.000Z",
    "capecIds": null,
    "crossRefCount": 0,
    "attackSophistication": "moderate",
    "impactType": null,
    "aiComponentTargeted": "model",
    "llmSpecific": true,
    "classifierConfidence": 0.75,
    "researchCategory": null,
    "atlasIds": null
  }
}