{"data":{"id":"1fe818e7-89e5-4c59-bdb0-a3e46a034bc5","title":"Wiz Code Week Recap: Securing AI Native Development","summary":"AI models can now find and exploit software vulnerabilities faster than security teams can defend against them, creating urgent security challenges for AI-driven development. Wiz responded on two fronts: it launched an AI-BOM (a tool that automatically catalogs AI frameworks, models, and IDE extensions such as GitHub Copilot and Cursor) to give security teams visibility into how AI tools interact with their data, and it embedded security guardrails directly into developer IDEs through plugins that catch hardcoded secrets, misconfigurations, and AI-specific risks like prompt injection (tricking an AI by hiding instructions in its input) before code is committed.","solution":"Wiz Code plugins for AI-native IDEs (such as Claude Code and Cursor) embed security directly into development workflows, using pre-commit hooks (automated checks that run before code is committed) to catch hardcoded secrets, IaC (infrastructure-as-code) misconfigurations, vulnerabilities, and AI-specific issues. Additionally, Wiz Skills let developers automatically pull active security issues from the Wiz Security Graph and apply fixes directly in the IDE using the Wiz Green Agent, which generates fixes based on full code-to-cloud context.","labels":["security","industry"],"sourceUrl":"https://www.wiz.io/blog/wiz-code-week-recap","publishedAt":"2026-04-29T13:58:15.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":[],"issueType":"news","affectedPackages":null,"affectedVendors":["Google","Microsoft"],"affectedVendorsRaw":["Google Gemini Code Assist","GitHub Copilot","Cursor","Claude Code","Wiz"],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":"2026-04-29T13:58:15.000Z","capecIds":null,"crossRefCount":0,"attackSophistication":"moderate","impactType":["integrity","confidentiality"],"aiComponentTargeted":"agent","llmSpecific":true,"classifierConfidence":0.85,"researchCategory":null,"atlasIds":null}}