{"data":{"id":"52ec104b-db4a-4bf8-92a2-49d5b92ea394","title":"Machine Learning Attack Series: Backdooring models","summary":"This post discusses backdooring attacks on machine learning models, in which an adversary who gains access to a model file (the trained model used in production) overwrites it with a malicious version. The threat was identified during threat modeling, a security planning process in which teams enumerate potential attacks to prepare defenses. The post indicates it will cover the attack, mitigations, and how Husky AI was built to address this risk.","solution":"N/A -- no mitigation discussed in source.","labels":["security","research"],"sourceUrl":"https://embracethered.com/blog/posts/2020/husky-ai-machine-learning-backdoor-model/","publishedAt":"2020-09-18T21:59:47.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["model_poisoning"],"issueType":"news","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":["Husky AI"],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":null,"capecIds":null,"crossRefCount":0,"attackSophistication":"moderate","impactType":["integrity"],"aiComponentTargeted":"model","llmSpecific":false,"classifierConfidence":0.75,"researchCategory":null,"atlasIds":null}}