{
  "data": {
    "id": "c62a3555-e77f-4975-a94b-c2c979bbd988",
    "title": "Google AI Studio Data Exfiltration via Prompt Injection - Possible Regression and Fix",
    "summary": "Google AI Studio had a prompt injection vulnerability (tricking an AI by hiding malicious instructions in its input) that allowed attackers to steal data: a malicious uploaded file could instruct the AI to exfiltrate the contents of other uploaded files to an attacker's server via rendered image tags. The vulnerability appeared in a recent update and was fixed within 12 days of being reported to Google on February 17, 2024.",
    "solution": "Google fixed the issue; it no longer reproduced when the reporter heard back about the submission 12 days later (by approximately February 29, 2024). The ticket was closed as 'Duplicate' on March 3, 2024, suggesting the vulnerability may also have been caught through internal testing.",
    "labels": ["security"],
    "sourceUrl": "https://embracethered.com/blog/posts/2024/google-aistudio-mass-data-exfil/",
    "publishedAt": "2024-04-07T23:00:30.000Z",
    "cveId": null,
    "cweIds": null,
    "cvssScore": null,
    "cvssSeverity": null,
    "severity": "info",
    "attackType": ["prompt_injection", "data_extraction"],
    "issueType": "news",
    "affectedPackages": null,
    "affectedVendors": ["Google"],
    "affectedVendorsRaw": ["Google AI Studio", "Gemini"],
    "classifierModel": "claude-haiku-4-5-20251001",
    "classifierPromptVersion": "v3",
    "cvssVector": null,
    "attackVector": null,
    "attackComplexity": null,
    "privilegesRequired": null,
    "userInteraction": null,
    "exploitMaturity": null,
    "epssScore": null,
    "patchAvailable": null,
    "disclosureDate": null,
    "capecIds": null,
    "crossRefCount": 0,
    "attackSophistication": "moderate",
    "impactType": ["confidentiality"],
    "aiComponentTargeted": "api",
    "llmSpecific": true,
    "classifierConfidence": 0.85,
    "researchCategory": null,
    "atlasIds": null
  }
}