How AI Is Changing Your GRC Strategy (So verändert KI Ihre GRC-Strategie)
Summary
As companies adopt generative and agentic AI (AI systems that can take actions autonomously), they need to update their GRC (Governance, Risk & Compliance, the framework for managing rules, risks, and regulatory requirements) programs to account for AI-related risks. According to a 2025 security report, about 1 in 80 requests from company devices to AI services poses a high risk of exposing sensitive data, yet only 24% of companies have implemented comprehensive AI-GRC policies.
Solution / Mitigation
The source recommends several approaches:

1. Foster broad organizational acceptance of risk management by promoting cooperation, so all employees understand they must work together.
2. Develop both strategic and tactical approaches to define the different types of AI tools, assess their relative risks, and weigh their potential benefits.
3. Apply tactical measures such as Secure-by-Design approaches (building security into AI tools from the start), initiatives to detect shadow AI (unauthorized AI use), and risk-based AI inventory and classification, so that resources focus on the highest-impact risks without creating burdensome processes.
4. Make the risks of specific AI measures transparent to business leadership rather than simply approving or rejecting AI use.
Classification
Affected Vendors
Original source: https://www.csoonline.com/article/4030328/so-verandert-ki-ihre-grc-strategie.html
First tracked: February 25, 2026 at 03:00 AM
Classified by LLM (prompt v3) · confidence: 72%