aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

Vulnerabilities · News · Research · Digest Archive · Newsletter Archive · Subscribe · Data Sources · Statistics · Dataset · API · Integrations · Widget · RSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

17 items

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

info · news
industry
Mar 20, 2026

This technology news roundup covers OpenAI's plan to build an autonomous AI researcher (a fully automated agent-based system that can solve complex problems independently), with an AI research intern prototype expected by September 2026 and a full multi-agent system by 2028. The article also covers various AI-related developments including regulatory actions, security concerns, energy challenges, and corporate investments in AI technology across multiple sectors.

MIT Technology Review

5 key priorities for your RSAC 2026 agenda

info · news
security · policy
Mar 19, 2026

RSA Conference 2026 is fundamentally organized around AI security, with 40% of sessions focused on how AI affects cybersecurity across all tracks. CISOs face a dual challenge: adopting AI quickly to stay competitive while simultaneously securing enterprise systems against new threats that AI itself creates. The conference prioritizes five learning areas: securing the AI stack (including RAG workflows, LLM data pipelines, and prompt injection attacks), AI governance and regulatory compliance, managing non-human identities (AI agents and service accounts that now outnumber human users), addressing shadow AI risks (unsanctioned tools and AI-generated code), and implementing autonomous security operations.

CSO Online

What it takes to win that CSO role

info · news
security
Mar 16, 2026

This article discusses how the Chief Security Officer (CSO) and Chief Information Security Officer (CISO) roles have evolved from technical positions focused on perimeter defense (protecting network boundaries) into strategic leadership roles reporting to CEOs. Leaders must now govern emerging risks like shadow AI (unauthorized AI tools used without approval) and generative AI while acting as business enablers rather than blockers. Modern CSOs are expected to balance security with business continuity, address regulatory compliance strategically, and help organizations achieve their goals rather than simply prevent risks.

CSO Online

The Evolution of AI Compliance Assistance from Reactive Support to Co-Agency

info · research · Peer-Reviewed
policy · safety
Mar 6, 2026

A banking group implemented a retrieval-augmented AI-powered compliance assistant (a system where AI pulls in external compliance documents to answer questions) to help with regulatory requirements while maintaining human oversight. The article identifies key challenges with this approach, including authority illusion (over-trusting the AI's answers), unclear responsibility for decisions, loss of human judgment about context, and gaps in understanding how the system works, then proposes a four-phase framework to help organizations move from passive AI assistants toward systems where AI and humans reason together.

AIS eLibrary (Journal of AIS, CAIS, etc.)

AWS launches a new AI agent platform specifically for healthcare

info · news
industry
Mar 5, 2026

AWS launched Amazon Connect Health, an AI agent-powered platform (software that completes complex tasks automatically) designed to help healthcare organizations automate administrative work like appointment scheduling and patient records. The platform is HIPAA-eligible (meets healthcare privacy and security standards) and integrates with existing electronic health record systems, marking AWS's first major AI agent product in a regulatory-compliant healthcare offering.

TechCrunch

Boards don’t need cyber metrics — they need risk signals

info · news
security
Feb 25, 2026

Security teams typically report many activity metrics (like blocked attacks and patched vulnerabilities), but experts argue that boards need different information: risk signals that show whether danger is increasing or decreasing and how fast the organization detects and contains problems. Effective board-level security reporting should focus on business impact (financial loss, regulatory exposure, operational disruption) rather than technical details, using measures like detection speed and containment time that non-technical decision-makers can understand.

CSO Online

So verändert KI Ihre GRC-Strategie (How AI is changing your GRC strategy)

info · news
policy · security
Feb 24, 2026

As companies adopt generative and agentic AI (AI systems that can take actions autonomously), they need to update their GRC (Governance, Risk & Compliance, the framework for managing rules, risks, and regulatory requirements) programs to account for AI-related risks. According to a 2025 security report, about 1 in 80 requests from company devices to AI services poses a high risk of exposing sensitive data, yet only 24% of companies have implemented comprehensive AI-GRC policies.

Fix: The source text recommends several explicit approaches: (1) Foster broad organizational acceptance of risk management across the company by promoting cooperation so all employees understand they must work together; (2) Develop both strategic and tactical approaches to define different types of AI tools, assess their relative risks, and weigh their potential benefits; (3) Use tactical measures including Secure-by-Design approaches (building security into AI tools from the start), initiatives to detect shadow AI (unauthorized AI use), and risk-based AI inventory and classification to focus resources on highest-impact risks without creating burdensome processes; (4) Make risks of specific AI measures transparent to business leadership rather than simply approving or rejecting AI use.

CSO Online

Tesla adding Grok AI chatbot to its cars in the UK, Europe amid regulatory probes

info · news
safety · policy
Feb 17, 2026

Tesla is adding Grok, an AI chatbot from Elon Musk's company xAI, to its vehicle infotainment systems (the dashboard computers that control entertainment and information) in the U.K. and nine other European markets. However, Grok has faced multiple regulatory investigations across Europe and Asia because it lacks safety guardrails, allowing users to create deepfake explicit images (fake videos or photos that look real but are computer-generated) of real people without consent, generate hate speech, and interact inappropriately with minors. Safety researchers also worry that adding chatbots to cars creates a "distraction layer" that could pull drivers' attention away from the road.

CNBC Technology

What CISOs need to know about the OpenClaw security nightmare

high · news
security · safety
Feb 12, 2026

OpenClaw is a popular open-source AI agent orchestration tool (software that coordinates multiple AI agents to complete tasks) that runs locally and can connect to apps like WhatsApp, Gmail, and smart home devices, but security researchers have found it to be critically insecure by default. Over 42,000 exposed instances have been discovered with authentication bypass vulnerabilities (weaknesses that let attackers skip login requirements) and potential remote code execution (RCE, where attackers can run commands on affected systems), exposing organizations to data breaches, credential theft, and regulatory violations.

Fix: Rich Mogull, chief analyst at Cloud Security Alliance, recommends that "CISOs prohibit its use altogether." He states: "The answer has to be 'no.' There is no security model."

CSO Online

The Download: inside the QuitGPT movement, and EVs in Africa

info · news
industry
Feb 11, 2026

The QuitGPT movement is a growing campaign where users are canceling their ChatGPT subscriptions due to frustration with the chatbot's capabilities and communication style, with complaints flooding social media platforms in recent weeks. The article also covers several other tech stories, including potential cost competitiveness of electric vehicles in Africa by 2040, social media companies agreeing to independent safety assessments for teen mental health protection, and regulatory decisions affecting vaccine development.

MIT Technology Review

Toward a Secure Framework for Regulating Artificial Intelligence Systems

info · research · Peer-Reviewed
policy
Oct 1, 2025

This paper addresses the lack of technical tools for regulating high-risk AI systems by proposing SFAIR (Secure Framework for AI Regulation), a system that automatically tests whether an AI meets regulatory standards. The framework uses a temporal self-replacement test (similar to certification exams for human operators) to measure an AI's operational qualification score, and protects itself using encryption, randomization, and real-time monitoring to prevent tampering.

Fix: The paper proposes SFAIR as a comprehensive framework for securing AI regulation. Key technical safeguards mentioned include: randomization, masking, encryption-based schemes, and real-time monitoring to secure SFAIR operations. Additionally, the framework leverages AMD's Secure Encrypted Virtualization-Encrypted State (SEV-ES, a processor-level security technology that encrypts AI system memory) for enhanced security. The source code of SFAIR is made publicly available.

IEEE Xplore (Security & AI Journals)

AI Regulatory Sandbox Approaches: EU Member State Overview

info · regulatory
policy
May 2, 2025

AI regulatory sandboxes are controlled testing environments where companies can develop and test AI systems with guidance from regulators before releasing them to the public, as required by the EU AI Act (EU's new rules for artificial intelligence). These sandboxes help companies understand what regulations they must follow, protect them from fines if they follow official guidance, and make it easier for small startups to enter the market. Each EU Member State must create at least one sandbox by August 2, 2026, though different countries are taking different approaches to organizing them.

EU AI Act Updates

Small Businesses’ Guide to the AI Act

info · regulatory
policy
Feb 18, 2025

The EU AI Act includes specific support measures for small and medium-sized enterprises (SMEs, defined as companies with fewer than 250 employees and under €50 million in annual revenue). These measures include regulatory sandboxes (controlled testing environments for AI products outside normal regulatory rules), reduced compliance fees scaled to company size, simplified documentation forms, free training, and dedicated support channels to help SMEs follow the AI Act's requirements.

Fix: The source explicitly mentions several mitigation measures for SME compliance: (1) Regulatory sandboxes with free access and simple procedures for SMEs to test AI systems in controlled conditions, (2) Assessment fees proportional to SME size with regular review to lower costs, (3) Simplified technical documentation forms developed by the Commission and accepted by national authorities, (4) Training activities tailored to SMEs, (5) Dedicated guidance channels to answer compliance questions, and (6) Proportionate obligations for AI model providers with separate Key Performance Indicators for SMEs under the Code of Practice.

EU AI Act Updates

Why work at the EU AI Office?

info · regulatory
policy
Jun 7, 2024

This article describes the EU AI Office, a newly established regulatory organization within the European Commission tasked with enforcing the AI Act (the world's first comprehensive binding AI regulation) across the European Union. Unlike AI safety institutes in other countries, the EU AI Office has actual enforcement powers to require AI model providers to fix problems or remove non-compliant models from the market. The office will conduct model evaluations, investigate violations, and work with international partners to shape global AI governance standards.

EU AI Act Updates

The AI Office is hiring

info · regulatory
policy
Mar 22, 2024

The European Commission is hiring AI specialists to work in the AI Office, which will enforce the EU's AI Act by overseeing compliance of general-purpose AI models (large AI systems available to the public). The office will have real regulatory powers to require companies to implement safety measures, restrict models, or remove them from the market, and will develop evaluation tools and benchmarks to identify dangerous AI behaviors.

EU AI Act Updates

The AI Office: What is it, and how does it work?

info · regulatory
policy
Mar 21, 2024

The European AI Office is a new EU regulator created to oversee general-purpose AI (GPAI) models and systems, which are AI systems designed to perform a wide range of tasks, across all 27 EU Member States under the AI Act. It monitors compliance, analyzes emerging risks, develops evaluation capabilities, produces voluntary codes of practice for companies to follow, and coordinates enforcement between national regulators and international partners. The Office also supports small and medium businesses with compliance resources and oversees regulatory sandboxes, which are controlled environments where companies can test AI systems before full deployment.

EU AI Act Updates

AI Act Implementation: Timelines & Next steps

info · regulatory
policy
Feb 28, 2024

The EU AI Act is a regulatory framework that requires companies to comply with rules for different types of AI systems on specific timelines, starting with prohibitions on the riskiest AI uses within 6 months and expanding to cover high-risk AI systems (such as those used in law enforcement, hiring, or education) by 24 months after the law takes effect. The article outlines key compliance deadlines, secondary laws the EU Commission might create to clarify the rules, and guidance documents to help organizations understand how to follow the AI Act.

EU AI Act Updates