All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
As AI systems move into everyday business use, companies are discovering that the biggest challenge is not making AI faster or more powerful, but ensuring AI has the business context (the meaning and relationships behind data) it needs to make good decisions. Without this context, AI can produce answers quickly but make wrong choices, like a supply-chain system that optimizes inventory numbers without understanding which customers are strategically important or what tradeoffs matter during shortages. Organizations are now building data fabrics (systems that connect information across applications while preserving how the business actually works) as a foundation to give AI the context it needs to make decisions aligned with real business priorities.
Agent loops in Codex (an AI coding assistant) involved many back-and-forth API requests that added significant delays, especially as model inference speeds approached 1,000 tokens (roughly word-sized units of text) per second. To reduce this overhead, the team implemented WebSockets (a protocol that maintains a persistent connection between client and server, rather than opening a new connection for each request), along with caching and the elimination of unnecessary network calls, achieving a 40% overall speedup in end-to-end performance.
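As a rough illustration of why per-request connection overhead starts to dominate once inference gets fast, here is a back-of-the-envelope latency model. All numbers are assumptions for illustration, not OpenAI's measurements:

```python
# Simple latency model (assumed numbers, not OpenAI's measurements) comparing
# a fresh HTTP connection per agent turn against one persistent WebSocket.

def loop_latency_ms(turns, tokens_per_turn, tokens_per_s, per_request_overhead_ms):
    generation = turns * tokens_per_turn / tokens_per_s * 1000  # pure inference time
    overhead = turns * per_request_overhead_ms                  # connection/setup cost
    return generation + overhead

# Hypothetical 50-turn agent loop, 200 tokens per turn, ~1,000 tokens/s inference:
http = loop_latency_ms(50, 200, 1000, per_request_overhead_ms=120)  # new TCP/TLS setup each turn
ws = loop_latency_ms(50, 200, 1000, per_request_overhead_ms=5)      # persistent WebSocket

print(f"HTTP-per-request: {http:.0f} ms, WebSocket: {ws:.0f} ms")
```

Under these assumed numbers, per-turn setup accounts for over a third of end-to-end latency, which is why eliminating it yields a large speedup even though inference time is unchanged.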
Workspace agents are AI systems designed to automate repeatable workflows in your daily work by connecting to tools your team uses, rather than helping with one-off tasks. A workspace agent has three core components: a trigger (what starts it, like a schedule), a process with specialized skills (the steps it follows), and access to tools or systems (like Slack or a CRM). Unlike traditional deterministic workflows (where each step is explicitly defined and always the same), agents are probabilistic, meaning they use AI to interpret context and adjust their approach while staying within set instructions and guardrails.
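The three components can be sketched as a minimal data structure. The class and field names below are assumptions for illustration, not OpenAI's actual agent schema:

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch only: names and structure are assumptions,
# not OpenAI's actual workspace-agent schema.

@dataclass
class WorkspaceAgent:
    trigger: str                  # what starts it, e.g. a schedule
    instructions: str             # the process and specialized skills it follows
    tools: list = field(default_factory=list)  # systems it may call (Slack, CRM, ...)

    def run(self, context: str, llm: Callable[[str], str]) -> str:
        # Probabilistic core: an LLM interprets the context within the
        # guardrails set by the instructions, instead of fixed branching.
        prompt = f"{self.instructions}\n\nContext: {context}\nTools: {self.tools}"
        return llm(prompt)

# Usage with a stand-in for the model call:
agent = WorkspaceAgent(
    trigger="every Monday 09:00",
    instructions="Summarize last week's support tickets and post to Slack.",
    tools=["slack", "crm"],
)
report = agent.run("42 tickets, 3 escalations", llm=lambda p: "Weekly summary: ...")
print(report)
```

The contrast with a deterministic workflow is in `run`: instead of hard-coded steps, the model decides how to apply the instructions to the context it receives.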
OpenAI has introduced workspace agents in ChatGPT, which are AI tools that can handle complex work tasks and long-running workflows while respecting organizational permissions and controls. These agents, powered by Codex (a code-generating AI model), can automate tasks like report writing, code generation, and message responses, and can continue working in the cloud even when users are offline. Teams can create shared agents once and reuse them across ChatGPT and Slack, with examples including agents that review software requests, route product feedback, and manage vendor risk assessment.
Anthropic's Mythos AI model, a tool designed to find security weaknesses in software, was accessed by unauthorized users through a private online forum using a contractor's credentials and basic internet research techniques. The model is capable of identifying and exploiting vulnerabilities (security flaws) in major operating systems and web browsers, which is why Anthropic warned it could be dangerous if misused.
Anthropic is investigating a report that unauthorized users gained access to Mythos, an AI model designed to detect cybersecurity vulnerabilities that the company has kept private because it could be misused to enable cyber-attacks. A small group of people allegedly accessed the model without permission, prompting the company to look into the incident.
Terrarium, a Python sandbox developed by Cohere AI for running untrusted code in containers, has a critical vulnerability (CVE-2026-5752, CVSS 9.3) that allows attackers to execute arbitrary code with root privileges through JavaScript prototype chain traversal (a technique where attackers manipulate how JavaScript looks up object properties to access restricted functionality). Since the project is no longer maintained, a patch is unlikely, but CERT/CC recommends several defensive measures.
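The reported flaw abuses the JavaScript prototype chain, but the same class of escape (walking an object graph from a harmless value to restricted functionality) exists in Python sandboxes too. The `naive_eval` function below is a hypothetical example of the pattern, not Terrarium's code:

```python
# Python analogue of object-graph traversal sandbox escapes (a sketch; the
# Terrarium flaw itself reportedly uses the JavaScript prototype chain).

def naive_eval(expr: str) -> object:
    # Hypothetical sandbox: strips all builtins, assuming that is enough.
    return eval(expr, {"__builtins__": {}}, {})

# Even with no builtins, attribute access still works, so an attacker can
# walk from a harmless literal up the class hierarchy to reach every class
# loaded in the interpreter:
classes = naive_eval("().__class__.__base__.__subclasses__()")
print(type(classes), len(classes) > 0)
```

The lesson in both languages is the same: filtering names is not isolation, because the runtime's object graph links "safe" values to powerful ones.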
GitHub Copilot changed its pricing and usage limits for individual users because agentic workflows (AI agents that run long tasks automatically) consume far more computing resources than expected, with some users burning tokens (units of text processed by the AI) at much higher rates than before. The changes include pausing new individual plan signups, moving the most advanced Claude Opus 4.7 model to a more expensive $39/month tier, and switching to token-based usage limits tracked per session and per week instead of per-request charging.
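A minimal sketch of how per-session and per-week token caps might interact, as opposed to flat per-request charging. The class and the limit values are hypothetical, not GitHub's implementation:

```python
# Hypothetical token-budget tracker (illustrative; not GitHub's implementation).

class TokenBudget:
    def __init__(self, per_session: int, per_week: int):
        self.per_session, self.per_week = per_session, per_week
        self.session_used = 0
        self.week_used = 0

    def start_session(self):
        self.session_used = 0  # weekly usage persists across sessions

    def consume(self, tokens: int) -> bool:
        # Refuse the request once either limit would be exceeded.
        if (self.session_used + tokens > self.per_session
                or self.week_used + tokens > self.per_week):
            return False
        self.session_used += tokens
        self.week_used += tokens
        return True

budget = TokenBudget(per_session=50_000, per_week=300_000)
print(budget.consume(40_000))  # True
print(budget.consume(20_000))  # False: would exceed the session cap
budget.start_session()
print(budget.consume(20_000))  # True: the weekly budget still has room
```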
Anthropic briefly updated its pricing page to move Claude Code (an AI coding agent feature) from the $20/month Pro plan to exclusive availability on $100-200/month Max plans, but quickly reverted the change after public backlash. Anthropic's Head of Growth claimed this was a test affecting only ~2% of new signups, though the change was widely visible and caused significant concern about affordability and lack of transparency.
xAI has announced a deal to either acquire Cursor, an AI-powered coding platform, for $60 billion or pay a $10 billion fee instead. The move aims to help xAI compete with other companies in the AI coding space, as major tech firms like Google and OpenAI are also investing heavily in their own AI programming tools.
Flowise, a tool with a visual interface for building customized AI flows, has a vulnerability before version 3.1.0 where authenticated attackers can execute arbitrary commands on the server. The flaw exists in the MCP (model context protocol) adapter's handling of stdio commands, where input sanitization checks fail to prevent attackers from combining safe commands like "npx" with code execution arguments to run malicious commands on the underlying operating system.
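Why validating only the executable name fails can be shown with a hypothetical validator (not Flowise's actual code): "npx" itself looks safe, but its arguments carry the payload, since npx will fetch and run arbitrary packages.

```python
import shlex

# Hypothetical validator, illustrative of the flawed pattern (not Flowise's code).
SAFE_COMMANDS = {"npx", "node"}

def naive_is_safe(command: str) -> bool:
    # Flawed: inspects only the executable name, ignoring arguments.
    return shlex.split(command)[0] in SAFE_COMMANDS

# "npx" is allowlisted, but the arguments make it run attacker-chosen code:
malicious = "npx -y some-malicious-package --exfiltrate"
print(naive_is_safe(malicious))  # passes the check despite being dangerous

def stricter_is_safe(command: str, allowed: dict[str, set[str]]) -> bool:
    # Sounder approach: validate the full argument vector against a
    # per-command allowlist, not just the executable name.
    argv = shlex.split(command)
    return argv[0] in allowed and all(a in allowed[argv[0]] for a in argv[1:])

print(stricter_is_safe(malicious, {"npx": {"--version"}}))  # False
```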
A serious vulnerability in Oracle Java SE and related products (JAXP component, which handles XML processing) allows attackers on the network to access sensitive data without needing to log in or interact with a user. The flaw affects multiple versions of Java and can be exploited through web services or untrusted code loaded in Java applications, with a CVSS score (0-10 severity rating) of 7.5 indicating high risk for data theft.
Optical coherence tomography (OCT, a technique that uses infrared light to create detailed 3D images of internal body structures like the retina) was invented by David Huang and colleagues at MIT and Harvard Medical School, and is now used in 40 million medical procedures annually. The technology emerged from Huang's work combining ultrafast lasers with interferometry (a measurement method that detects extremely precise time delays of light waves) to achieve micrometer-level resolution imaging of tissue. Huang's success came from collaborating across medical and engineering disciplines, and the invention has since been refined for new applications in eye imaging.
OpenAI released ChatGPT Images 2.0 on April 21, 2026, an image generation model (a system that creates pictures from text descriptions) that the company claims represents a major leap in capability. The author tested it against other models like Google's Gemini and Claude by asking them to generate Where's Waldo-style images with a hidden raccoon holding a ham radio, finding that gpt-image-2 produced more detailed and accurate results, especially at higher quality settings.
A critical remote code execution vulnerability (CVE-2026-34197, a flaw allowing attackers to run arbitrary commands on a system) was discovered in Apache ActiveMQ messaging software on April 7, but nearly two weeks later, over 6,500 unpatched instances remain exposed to the internet. Security experts emphasize that with AI tools now able to find vulnerabilities in minutes, organizations must move beyond slow manual patching processes to keep pace with rapidly weaponized exploits.
Flowise version 3.0.13 has a vulnerability in its CSV Agent node that allows attackers to run arbitrary code on the server without needing to log in. The flaw occurs because the CSV Agent's `run` method doesn't properly sandbox (isolate) Python code generated by an LLM, and the validation checks that try to block dangerous commands can be bypassed, letting attackers execute system commands through the LLM-generated script.
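A toy example of why substring blocklists on generated code are bypassable. The validator and blocked tokens below are hypothetical, not Flowise's actual checks (which are not detailed in the advisory):

```python
# Hypothetical blocklist validator for LLM-generated Python (illustrative only).

BLOCKED = ("import os", "subprocess", "system(")

def passes_blocklist(code: str) -> bool:
    # Flawed: checks for literal substrings in the source text.
    return not any(token in code for token in BLOCKED)

# Obfuscated equivalent of `import os` containing none of the blocked tokens.
# (The string is only inspected here, never executed.)
payload = "__import__('o' + 's').getcwd()"
print(passes_blocklist(payload))  # True -- the check is bypassed
```

Source-level filtering cannot win this game; the robust fix is genuine isolation (a separate process or container with no ambient privileges) around whatever the LLM generates.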
OpenAI has released ChatGPT Images 2.0, an updated image generator that uses new 'thinking capabilities' to search the web and create multiple images from a single prompt. The new version, powered by GPT Image 2, can generate more sophisticated images with better instruction-following, detail preservation, and text generation abilities, and is available to ChatGPT Plus, Pro, Business, and Enterprise subscribers.
Fix: The team implemented WebSockets as a persistent connection protocol for the Responses API instead of using multiple synchronous HTTP requests. Additionally, they applied caching to store rendered tokens and model configuration in memory to skip expensive tokenization and network calls, reduced network hop latency by eliminating intermediate service calls and directly contacting the inference service, and improved the safety stack to run classifiers faster.
OpenAI Blog
AI tools like Anthropic's Mythos can find software vulnerabilities much faster than before, creating a problem: security teams must decide which vulnerabilities to fix first among thousands of options. Anthropic recommends using EPSS (Exploit Prediction Scoring System, a machine learning model that predicts which vulnerabilities are likely to be exploited in the next 30 days) to prioritize which vulnerabilities need immediate attention, similar to how weather forecasters predict whether you'll need an umbrella.
Fix: According to Anthropic's guidance: 'Patching the KEV (CISA's Known Exploited Vulnerabilities catalog) list first, and then everything above a chosen EPSS threshold will help you turn thousands of open CVEs into a manageable queue.' EPSS scores are machine-driven and can be applied across all CVEs with scores published daily, and have been incorporated into more than 120 security vendors' products.
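The KEV-first, EPSS-threshold rule can be sketched as a small triage function. The CVE IDs, scores, and threshold below are made up for illustration:

```python
# Sketch of KEV-first, then EPSS-above-threshold triage (illustrative data).

def triage(cves, kev, epss, threshold=0.2):
    # KEV entries are known-exploited: always patch first.
    urgent = [c for c in cves if c in kev]
    # Everything else above the EPSS threshold, highest predicted risk first.
    soon = sorted(
        (c for c in cves if c not in kev and epss.get(c, 0.0) >= threshold),
        key=lambda c: epss[c], reverse=True,
    )
    return urgent, soon

cves = ["CVE-A", "CVE-B", "CVE-C", "CVE-D"]
kev = {"CVE-C"}                                      # in CISA's KEV catalog
epss = {"CVE-A": 0.91, "CVE-B": 0.03, "CVE-D": 0.35}  # daily EPSS scores
urgent, soon = triage(cves, kev, epss)
print(urgent, soon)  # ['CVE-C'] ['CVE-A', 'CVE-D']
```

Low-probability findings like CVE-B drop out of the queue entirely, which is how thousands of open CVEs become a manageable list.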
CSO Online
Fix: CERT/CC advises the following mitigations: disable features that allow users to submit code to the sandbox, if possible; segment the network to limit the attack surface and prevent lateral movement; deploy a Web Application Firewall to detect and block suspicious traffic, including exploitation attempts; monitor container activity for signs of suspicious behavior; limit access to the container and its resources to authorized personnel only; use a secure container orchestration tool to manage and secure containers; and ensure that dependencies are up to date and patched.
The Hacker News
Microsoft Defender has a vulnerability in access control (the rules that decide what actions a user is allowed to perform) that could let an authorized attacker gain higher-level system permissions on a local computer. The vulnerability is currently being exploited by attackers in real-world attacks.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities
OpenAI released Privacy Filter, an open-weight AI model designed to detect and remove personally identifiable information (PII, such as names, addresses, phone numbers, and account details) from text. The model uses context-aware language understanding rather than simple pattern matching, can run locally on a user's device to keep sensitive data from being sent to servers, and achieves state-of-the-art performance on privacy detection benchmarks. Developers can use, fine-tune, and integrate Privacy Filter into their own applications to build stronger privacy protections into AI systems.
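The redaction plumbing around such a detector might look like the sketch below. The `redact` helper is hypothetical, and the toy regex detector is only a stand-in: Privacy Filter itself uses context-aware language understanding, not pattern matching.

```python
import re
from typing import Callable

# Plumbing sketch only: `detect` stands in for a model like Privacy Filter.
# The regex detector is a toy placeholder, not how the model works.

def redact(text: str, detect: Callable[[str], list]) -> str:
    # `detect` returns (start, end, label) spans; replace each with [LABEL].
    out, last = [], 0
    for start, end, label in sorted(detect(text)):
        out.append(text[last:start])
        out.append(f"[{label}]")
        last = end
    out.append(text[last:])
    return "".join(out)

def toy_detector(text: str):
    # Stand-in for the model: flags anything shaped like a short phone number.
    return [(m.start(), m.end(), "PHONE")
            for m in re.finditer(r"\b\d{3}-\d{4}\b", text)]

print(redact("Call 555-0199 after 5pm.", toy_detector))
# Call [PHONE] after 5pm.
```

Running the whole pipeline locally, as the model supports, means the raw text never leaves the device.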
Fix: Update Flowise to version 3.1.0 or later, where this vulnerability is fixed.
NVD/CVE Database
Fix: Upgrade to patched versions 5.19.4 or 6.2.3 of ActiveMQ. Additionally, the source advises: create an automated software bill of materials (a detailed inventory of all software components) for every application using standards like CycloneDX so organizations can immediately identify which apps contain the vulnerable ActiveMQ software when a bug is announced, and implement automated patching and automated testing rather than relying on manual patch cycles.
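The SBOM lookup step can be sketched as follows. The dictionaries below are a simplified stand-in for real CycloneDX documents, and the app names and versions are invented for illustration:

```python
# Sketch of answering "which apps ship the vulnerable component?" from SBOMs.
# The dicts are a simplified stand-in for real CycloneDX documents.

def apps_with_component(sboms: dict, name: str, bad_versions: set) -> list:
    return sorted(
        app for app, components in sboms.items()
        if any(c["name"] == name and c["version"] in bad_versions
               for c in components)
    )

sboms = {
    "billing": [{"name": "activemq", "version": "5.18.0"}],
    "orders":  [{"name": "activemq", "version": "6.2.3"}],   # already patched
    "search":  [{"name": "lucene",   "version": "9.9.0"}],
}
print(apps_with_component(sboms, "activemq", bad_versions={"5.18.0"}))
# ['billing']
```

With this inventory maintained automatically, the time from a CVE announcement to a complete list of affected applications is a query, not an investigation.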
CSO Online