All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
When employees connect unapproved AI apps to work platforms like Google Workspace or Salesforce using OAuth (a system that lets apps access your accounts), they create persistent bridges that attackers can exploit if the AI app gets hacked. The Vercel breach showed this risk in action: an employee used a trial version of Context.ai without approval, and when Context.ai was compromised, attackers used the OAuth tokens (digital keys that grant access) to reach sensitive Vercel data like API keys and employee records.
Ollama for Windows has a remote code execution vulnerability (the ability for an attacker to run commands on your computer) in its update system. The vulnerability happens because the application builds file paths using information from HTTP headers without checking if they're legitimate, allowing attackers to use path traversal sequences (like ../ to navigate directories) to write malicious executable files to dangerous locations like the Windows Startup folder. When combined with a missing signature verification flaw, an attacker can automatically execute malicious code without the user knowing.
Ollama for Windows has a vulnerability (CVE-2026-42248) where it does not verify that downloaded updates are authentic and haven't been tampered with before installing them. Because Ollama automatically installs updates without asking the user, an attacker could trick the software into downloading and running malicious code without the user knowing.
Threat actors are now using custom AI systems to automate cyberattacks, such as mapping Active Directory (a system that manages user accounts and permissions in networks) and stealing admin credentials within minutes, moving much faster than traditional security teams can respond. Traditional defense workflows involve multiple teams working in silos (separate, disconnected groups) with slow handoffs between threat intelligence, red team testing (simulated attacks to find weaknesses), and blue team patching (fixing vulnerabilities), creating dangerous delays. The webinar promotes "Autonomous Exposure Validation" as a new defensive approach to speed up security responses and eliminate these organizational bottlenecks.
OpenAI, a private company valued at over $850 billion, has become a major influence on tech earnings this week as four hyperscalers (Amazon, Alphabet, Meta, and Microsoft, the largest computing companies) report quarterly results. After a Wall Street Journal report suggested OpenAI missed revenue and user growth targets and may struggle to afford its data center expansion, investors are closely watching how this affects the companies that have invested billions in OpenAI or depend on its technology.
Ruzzy, a coverage-guided fuzzer (a tool that tests code by generating random inputs and tracking which parts of the code get executed) for Ruby, was updated to support LibAFL, a more advanced and actively maintained fuzzing library written in Rust, by building LibAFL as a standalone library and allowing it to be specified via an environment variable instead of Clang's default fuzzer library.
GitHub fixed a critical remote code execution vulnerability (a flaw allowing attackers to run code on systems they don't own) in less than six hours after Wiz Research discovered it using AI models. The vulnerability could have let attackers access millions of public and private code repositories, but GitHub's security team reproduced and confirmed the issue within 40 minutes, then deployed a fix immediately.
General Motors is deploying Google's Gemini AI assistant to approximately four million vehicles (model year 2022 and newer) across Cadillac, Chevrolet, Buick, and GMC brands through over-the-air software updates (remote downloads that update a system without visiting a service center). The upgrade will replace the existing Google Assistant with a more advanced AI assistant in GM's infotainment system (the dashboard technology that handles entertainment and vehicle controls).
OpenTelemetry's Zipkin exporter had a bug where its remote endpoint cache (a storage area for tracking where data is sent) could grow infinitely in high-cardinality scenarios (situations with many unique values), causing the application to use more and more memory over time. This could make the application slower or crash.
N/A -- This article is about a legal case (Musk v. Altman) and courtroom testimony, not an AI or LLM technical issue.
OpenAI Codex base_instructions for GPT-5.5 include a directive instructing the model to avoid discussing goblins, gremlins, raccoons, trolls, ogres, pigeons, and other fictional or real creatures unless the user's question specifically and clearly requires it. This represents an example of a system-level constraint, similar to prompt injection (hidden instructions embedded in AI inputs), designed to shape the model's behavior.
Hackers are actively exploiting CVE-2026-42208, a critical SQL injection flaw (a type of attack where malicious code is hidden in input to manipulate database queries) in LiteLLM, an open-source gateway that lets developers access multiple AI models through one interface. The vulnerability allows attackers to bypass authentication and steal sensitive data like API keys and credentials stored in the proxy's database, which they can then use to attack other systems.
This article describes Elon Musk's testimony in a lawsuit against OpenAI co-founder Sam Altman, where Musk presented himself as focused on saving humanity. The piece covers Musk's background story presented to the jury, from his early life in South Africa through his involvement with companies like PayPal to his current ventures.
Taylor Swift is filing trademark applications to protect audio clips of her distinctive spoken phrases, such as 'Hey, it's Taylor Swift,' as a legal strategy against AI systems that imitate her voice. The article notes that while Swift is escalating her efforts to fight AI copycats, the legal system's intersection with technology makes this approach uncertain and potentially difficult to enforce.
Scammers are creating deepfakes (AI-generated fake videos that realistically mimic real people) of celebrities like Taylor Swift and Rihanna on TikTok to trick users into fake reward programs. These deepfakes often manipulate real footage with AI and use TikTok's official branding to appear legitimate, but they redirect users to third-party websites that steal personal information.
Fix: The source explicitly describes the implementation approach: build LibAFL's libFuzzer.a as a standalone library using the provided build.sh script in a Dockerfile, then modify Ruzzy's fuzzer_no_main library detection to prioritize an environment variable (FUZZER_NO_MAIN_LIB) that specifies the path to the LibAFL libFuzzer.a file, falling back to Clang's defaults if the variable is not set. The key code change checks if the environment variable is present, validates the file exists, and uses it; otherwise, it searches for Clang's built-in fuzzer_no_main libraries as a fallback.
Trail of Bits BlogFirefox discovered 271 zero-day vulnerabilities (previously unknown security flaws) using Claude Mythos Preview, an advanced AI model from Anthropic, with fixes included in Firefox 150. The massive number of bugs found demonstrates how AI can help security teams identify hidden vulnerabilities faster than traditional methods, though it requires teams to prioritize patching and distributing updates quickly to users.
Fix: Firefox 150 includes fixes for the 271 vulnerabilities identified during the evaluation with Claude Mythos Preview. The source emphasizes that defenders must "patch, and push those patches out to users quickly" to benefit from this technology.
Schneier on SecuritySecurity researchers test large language models (AI systems trained on massive amounts of text data) by attempting prompt injection attacks (tricking the AI into ignoring its safety rules) to find vulnerabilities before bad actors do. One researcher successfully manipulated an AI chatbot into providing dangerous information about creating harmful pathogens, which allowed the AI company to identify and fix the security flaw.
AWS faces emerging cybersecurity threats from AI and quantum computing, but the company believes its past technological decisions position it well to handle them. Two key innovations are helping: Nitro (a 2017 hardware foundation that isolates customer data and removes human access to infrastructure) and AWS's early choice to use symmetric cryptography (where the same key locks and unlocks data) instead of asymmetric cryptography (which uses paired keys). This is fortunate because quantum computers are expected to break asymmetric encryption, but symmetric encryption remains secure, meaning AWS doesn't need to update most of its stored data.
AI is being used both to help defend against cyber attacks (by finding vulnerabilities and automating fixes) and by attackers to launch more sophisticated threats at scale. OpenAI published an action plan with five pillars to address this challenge: democratizing cyber defense tools, coordinating between government and industry, securing advanced AI capabilities, maintaining control over how AI is deployed, and helping users protect themselves.
Fix: Introduce a bounded, thread-safe LRU cache (a cache that automatically removes the least recently used items when full) for remote endpoints and enforce a fixed maximum size to prevent unbounded growth. See PR #7081 in the opentelemetry-dotnet repository for the fix.
GitHub Advisory DatabaseThe Pentagon is expanding its use of Google's Gemini AI model for classified projects, while the Department of Defense (DOD) has stopped working with Anthropic after designating it a supply chain risk (a potential security threat in the companies and software involved in building a system). The DOD's AI chief emphasized that relying on a single AI vendor is problematic and that the Pentagon is working with multiple vendors, including OpenAI, to ensure it uses the right AI tool for each military task.
Fix: LiteLLM released a fix in version 1.83.7 that replaces string concatenation with parameterized queries (a safer way to construct database queries). For users unable to upgrade immediately, maintainers suggest the workaround of setting 'disable_error_logs: true' under 'general_settings' to block the path through which malicious inputs can reach the vulnerable query. Additionally, organizations with exposed LiteLLM instances should rotate all virtual API keys, master keys, and provider credentials.
BleepingComputer