aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

to
Export CSV
3122 items

Google’s Gemini AI is getting a bigger role across Docs, Sheets, and Slides

infonews
industry
Mar 10, 2026

Google is expanding its Gemini AI assistant into more of its Workspace apps, including a new chat window in Google Docs that lets users describe documents for AI to create, AI-powered spreadsheet generation, and a Gemini-powered search feature in Drive. The Gemini assistant can pull information from the web, Drive, Gmail, and other sources to help users with their work.

The Verge (AI)

Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive

infonews
industry
Mar 10, 2026

Google is adding new Gemini AI features to its productivity apps (Docs, Sheets, Slides, and Drive) that help users create and organize content faster by pulling information from their emails, files, and the web. These tools include features like automatically drafting documents, generating formatted spreadsheets, creating slides that match your theme, and searching across files using natural language (plain English questions instead of technical search terms). The goal is to let users accomplish tasks within Google's apps without switching to separate tools.

The Download: AI’s role in the Iran war, and an escalating legal fight

infonews
policyindustry

Sandbar secures $23M Series A for its AI note-taking ring

infonews
industry
Mar 10, 2026

Sandbar, a startup founded by former Meta employees, raised $23 million to develop the Stream ring, a wearable device with a microphone that records notes and lets users chat with an AI assistant through a phone app. The ring's microphone is off by default and only activates when users lift their hand to their face, which signals intent for private note-taking rather than recording surrounding conversations.

Trump's war predictions, Pershing Square files for IPO, Anthropic's lawsuit and more in Morning Squawk

infonews
policy
Mar 10, 2026

Anthropic, an AI company, filed a lawsuit against the federal government after the Pentagon blacklisted it as a 'supply chain risk' (a security classification typically reserved for foreign adversaries), claiming the move is unlawful and causes irreparable harm. The blacklisting followed Anthropic's disagreement with the Pentagon over how its AI systems could be used. Defense experts worry this precedent could harm U.S. competitiveness by cutting off access to a major American AI vendor.

Global Cyber Attacks Remain Near Record Highs in February 2026 Despite Ransomware Decline

infonews
security
Mar 10, 2026

In February 2026, organizations worldwide faced an average of 2,086 cyber attacks per week, a 9.6% increase from the previous year, indicating that high attack volumes are now a constant threat rather than a temporary spike. While ransomware attacks declined compared to last year, overall attack activity remains near record levels due to automation, expanded digital systems, and security risks from enterprise GenAI (generative AI used by businesses) usage.

Escape Raises $18 Million to Automate Pentesting

infonews
industry
Mar 10, 2026

Escape, a company that uses AI agents (software programs that act autonomously to complete tasks) to automate pentesting (simulated security attacks to find vulnerabilities), has raised $18 million in funding. The company plans to use this money to improve its AI capabilities and expand its teams.

How to Stop AI Data Leaks: A Webinar Guide to Auditing Modern Agentic Workflows

infonews
securitysafety

Family of child injured in Canada school shooting sues OpenAI

infonews
safetypolicy

Oracle earnings will show whether its expensive AI bet is starting to pay off

infonews
industry
Mar 10, 2026

Oracle is reporting earnings on Tuesday as investors try to determine whether its massive investment in AI infrastructure is profitable. The company raised $50 billion in financing (debt and equity) to build data centers, mainly to serve OpenAI, and bond investors are watching closely because Oracle had to borrow heavily compared to other major cloud computing companies, raising concerns about its financial health and credit rating.

Improving instruction hierarchy in frontier LLMs

inforesearchBlog Research
safety

Meta’s deepfake moderation isn’t good enough, says Oversight Board

mediumnews
safetypolicy

Auditing the Gatekeepers: Fuzzing "AI Judges" to Bypass Security Controls

infonews
securityresearch

New ways to learn math and science in ChatGPT

infonews
industry
Mar 10, 2026

ChatGPT has introduced new interactive visual explanations for over 70 math and science concepts, allowing learners to manipulate variables and see real-time effects on graphs and outcomes instead of just reading static explanations. Research suggests that this type of interactive, visual learning helps students build stronger conceptual understanding compared to traditional instruction. The feature is now available globally to all ChatGPT users across all plans.

Jailbreaking the F-35 Fighter Jet

infonews
security
Mar 10, 2026

This blog discusses the F-35 fighter jet and mentions claims that Israel has 'jailbroken' (modified the software to bypass manufacturer restrictions on) its version of the aircraft, the F-35I Adir, to operate independently from US control systems. The post explores the technical and political complications of modifying highly restricted military software, including concerns about backdoors (hidden access points that could allow unauthorized control), supply chain dependencies, and international trade consequences.

OpenAI to acquire Promptfoo to strengthen AI agent security testing

infonews
securityindustry

You Could Be Next

infonews
industrypolicy

Why access decisions are becoming the weakest link in identity security

infonews
security
Mar 10, 2026

Organizations often focus on authentication (proving who someone is) through tools like MFA (multi-factor authentication, requiring multiple verification methods) and SSO (single sign-on, a centralized login system), but the real security weakness is authorization—deciding what people should actually access. Many companies only govern a small fraction of their applications and systems, leaving legacy systems, test environments, and shadow IT tools outside formal security controls, which attackers deliberately target.

Nvidia plans open-source AI agent platform ‘NemoClaw’ for enterprises: Wired

infonews
industry
Mar 10, 2026

Nvidia is planning to launch NemoClaw, an open-source platform for AI agents (specialized AI tools that can reason, plan, and act independently on complex tasks) targeting enterprise companies like Salesforce and Google. The platform will allow these companies to deploy AI agents to perform work tasks and is expected to include security and privacy tools, with early access offered to partners who contribute to the project.

When AI safety constrains defenders more than attackers

mediumnews
securitysafety
Previous17 / 157Next
TechCrunch
Mar 10, 2026

This newsletter covers multiple AI and technology developments, including AI's expanding role in military decision-making during the Iran conflict through 'vibe-coded' intelligence dashboards (AI systems that present information in visually appealing but potentially unreliable formats), legal disputes between AI companies and governments, and emerging threats like GPS jamming in the Middle East. The piece also highlights concerns about AI cloning real people's voices without consent, developments in AI agents, and psychological effects of AI companions on users.

MIT Technology Review
TechCrunch
CNBC Technology
Check Point Research
SecurityWeek
Mar 10, 2026

AI Agents (software programs that automatically perform tasks like sending emails or moving data) create security risks because they have broad access to sensitive information with little oversight, making them targets for hackers who can trick them into leaking company secrets. Traditional security tools were designed to protect human users, not autonomous digital workers, leaving AI agents largely invisible to security teams. The article promotes an upcoming webinar that promises to explain how hackers target these agents and how to secure them without overly restricting their capabilities.

The Hacker News
Mar 10, 2026

A family is suing OpenAI after their 12-year-old daughter was critically injured in a Canadian school shooting, claiming that OpenAI knew the suspect was planning an attack through ChatGPT conversations but failed to alert authorities. The suspect's account was banned in June 2025 after employees flagged messages about gun violence as indicating imminent harm, but police were never notified, and the suspect later opened a second account to continue planning.

Fix: According to OpenAI's statement, the company has implemented several changes: enlisting mental health and behavioral experts to assess cases, making the criteria for police referral more flexible, strengthening detection systems to prevent evasion of safeguards, and establishing a direct point of contact with Canadian law enforcement to quickly flag cases with potential for real-world violence. OpenAI's CEO also pledged to strengthen protocols on notifying police about potentially harmful interactions.

BBC Technology
CNBC Technology
research
Mar 10, 2026

AI systems receive instructions from multiple sources (system policies, developers, users, and online data), and models must learn to prioritize the most trustworthy ones to stay safe. When models treat untrusted instructions as authoritative, they can be tricked into revealing private information, following harmful requests, or falling victim to prompt injection (hidden malicious instructions hidden in input data). OpenAI's solution uses a clear instruction hierarchy (System > developer > user > tool) and trains models with IH-Challenge, a reinforcement learning dataset designed to teach models to follow high-priority instructions even when lower-priority ones conflict with them.

Fix: OpenAI's models are trained on a clear instruction hierarchy where System instructions have highest priority, followed by developer instructions, then user instructions, then tool outputs. The company also created IH-Challenge, a reinforcement learning training dataset that generates conversations with conflicting instructions where high-priority instructions are kept simple and objectively gradable, ensuring models learn to prioritize correctly without resorting to useless shortcuts like over-refusing benign requests.

OpenAI Blog
Mar 10, 2026

Meta's Oversight Board (a semi-independent group that advises Meta on content moderation) found that Meta's methods for detecting deepfakes (AI-generated fake videos or images) are not strong enough to stop misinformation from spreading quickly during conflicts like the Iran war. The Board is calling on Meta to improve how it identifies and labels AI-generated content on Facebook, Instagram, and Threads.

The Verge (AI)
Mar 10, 2026

Researchers discovered that AI judges (LLMs acting as automated security gatekeepers to enforce safety policies) can be manipulated through prompt injection (tricking an AI by hiding instructions in its input) using stealthy formatting symbols rather than obvious gibberish. They created a tool called AdvJudge-Zero, a fuzzer (software that finds vulnerabilities by testing with unexpected inputs), which automatically identifies innocent-looking character sequences that exploit the model's decision-making logic to bypass security controls.

Fix: Palo Alto Networks customers are better protected through Prisma AIRS and the Unit 42 AI Security Assessment service. Organizations concerned about potential compromise can contact the Unit 42 Incident Response team.

Palo Alto Unit 42
OpenAI Blog
Schneier on Security
Mar 10, 2026

OpenAI is acquiring Promptfoo, a company that builds testing tools for AI applications, to improve security checks for AI agents (autonomous systems that operate independently in business processes) as more companies deploy them in production. Promptfoo's tools test AI models against adversarial prompts (malicious inputs designed to trick the AI), including prompt injection (hiding instructions in user input to manipulate the AI) and jailbreak attempts, and check whether models follow safety guidelines. The acquisition reflects growing enterprise concern about AI vulnerabilities and a shift toward treating AI security testing as an essential part of AI development, similar to traditional application security practices.

Fix: According to the source, the solution involves integrating Promptfoo's technology into OpenAI Frontier, OpenAI's platform for building and operating AI coworkers. The source also describes a 'shift-left approach' to AI testing, where security evaluation is integrated early in the development stage to simulate vulnerabilities, and continuous evaluation occurs during real-time monitoring and prompt execution. Additionally, enterprises are embedding AI evaluation platforms into DevSecOps workflows (development and security operations processes) so that models, prompts, and agent behaviors can be tested continuously before and after deployment.

CSO Online
Mar 10, 2026

Katya, a freelance journalist turned content marketer, was recruited by Mercor to create training data for AI models by writing chatbot prompts and responses, work she initially enjoyed but which was abruptly canceled without warning. The article describes how machine learning (AI systems that improve by finding patterns in large amounts of data) relies on thousands of humans hired to generate and grade training examples, but gig workers like Katya face sudden project cancellations and job instability in this emerging industry.

The Verge (AI)
CSO Online
CNBC Technology
Mar 10, 2026

Enterprise AI systems deployed for security work are heavily restricted by safety guardrails (automated filters designed to prevent harmful outputs), while attackers freely use jailbroken models (AI systems with safety measures bypassed), open-source alternatives, and purpose-built malicious tools. This creates an asymmetry where defenders face routine refusals when requesting legitimate defensive content like phishing simulations or proof-of-concept code, while attackers can easily circumvent safety measures through prompt injection (tricking AI by hiding instructions in its input) and other well-documented techniques, giving them a significant operational advantage.

CSO Online