All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
LangChain released version 1.2.10, which includes a bug fix for token counting on partial message sequences (a partial message sequence is a subset of messages in a conversation), dependency updates, and code refactoring to rename internal variables.
LangChain-core version 1.2.10 includes several updates: dependency bumps across multiple directories, a new ContextOverflowError (an exception raised when a prompt exceeds token limits) for Anthropic and OpenAI integrations, additions to model profiles for tracking text inputs and outputs, improved token counting for tool schemas (structured definitions of what functions an AI can call), and documentation fixes.
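The release notes above name a ContextOverflowError but do not show its API. The sketch below is hypothetical: the real class in langchain-core may have a different import path and constructor, so this toy version only illustrates the guard pattern such an exception enables.

```python
# Hypothetical sketch: ContextOverflowError is named in the release notes, but
# its real import path and constructor in langchain-core may differ.
class ContextOverflowError(Exception):
    """Raised when a prompt's token count exceeds the model's context window."""

    def __init__(self, token_count: int, limit: int):
        super().__init__(f"prompt uses {token_count} tokens, limit is {limit}")
        self.token_count = token_count
        self.limit = limit


def check_context(token_count: int, limit: int) -> None:
    """Guard helper: raise before sending an oversized prompt to the provider."""
    if token_count > limit:
        raise ContextOverflowError(token_count, limit)


try:
    check_context(token_count=9000, limit=8192)
except ContextOverflowError as e:
    print(f"overflow by {e.token_count - e.limit} tokens")
```

Raising a typed exception (rather than letting the provider return an opaque API error) lets callers catch overflows and, for example, truncate or summarize history before retrying.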
This is a game review of "Romeo Is a Dead Man," the first original game in 10 years from developer Suda51, and it criticizes the game as disappointing and confusing. The reviewer notes that while Suda51 is known for making creative, unconventional games, this title fails to deliver, instead offering an unclear story filled with confusing references that persist throughout its 20-hour runtime.
MarkUs is a web application for submitting and grading student assignments. Before version 2.9.1, instructors could upload a zip file to create assignments, but the application didn't properly validate the file paths inside the zip, allowing a path traversal attack (an exploit where attackers use special characters like "../" to write files outside the intended directory).
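The missing validation can be sketched as follows. MarkUs is a Ruby on Rails application, so this Python version (with names of this sketch's own invention) only illustrates the idea of the check, not the actual patch:

```python
import os
import zipfile

def safe_members(zip_src, dest_dir: str):
    """Yield zip entry names, rejecting any whose resolved path escapes dest_dir.

    Illustrates the kind of check missing before MarkUs 2.9.1; the real fix
    is in Ruby and may differ in detail.
    """
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_src) as zf:
        for name in zf.namelist():
            target = os.path.realpath(os.path.join(dest, name))
            # Entries containing "../" resolve outside dest_dir and are refused.
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"unsafe path in archive: {name}")
            yield name
```

Resolving each entry against the destination and comparing real paths catches both literal `../` sequences and tricks like absolute paths inside the archive.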
A research paper studied how to present large amounts of structured data (like SQL databases with thousands of tables) to AI language models in different formats (YAML, Markdown, JSON, and TOON) to help them generate correct code. The study found that more advanced models like GPT and Gemini performed much better than open-source models, and that using unfamiliar data formats like TOON actually made models less efficient because they spent extra effort trying to understand the new format.
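The format-sensitivity finding rests on a simple mechanism: the same schema costs a different number of characters (and therefore tokens) depending on the serialization. A toy illustration with made-up table names follows; TOON itself is not reproduced here, and real token counts depend on the model's tokenizer:

```python
import json

# Toy schema: two SQL tables and their columns (illustrative names only).
tables = {
    "users": ["id", "name", "email"],
    "orders": ["id", "user_id", "total"],
}

# The same structure serialized two ways; prompt length (and so token cost)
# depends on the chosen format, which is one axis the study varies.
as_json = json.dumps(tables, indent=2)
as_markdown = "\n".join(
    f"- {table}: {', '.join(cols)}" for table, cols in tables.items()
)

print(len(as_json), len(as_markdown))
```

Compactness alone is not the whole story, though: per the study, a format the model has rarely seen in training can cost more in effective reasoning than it saves in tokens.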
Moltbook was an online platform where AI agents (software programs designed to act independently) interacted with each other, which some people saw as a preview of useful AI in the future, but it turned out to be mostly a social experiment and entertainment similar to a 2014 internet phenomenon called Twitch Plays Pokémon. The platform was flooded with crypto scams and many 'AI' posts were actually written by humans controlling the agents, revealing that truly helpful AI systems would need better coordination, shared goals, and shared memory to work together effectively.
CVE-2026-25904 is a security flaw in the Pydantic-AI MCP Run Python tool where the Deno sandbox (a restricted environment for running code safely) is configured too permissively, allowing Python code to access the localhost interface and perform SSRF attacks (server-side request forgery, where an attacker tricks a server into making unwanted requests). The project is archived and unlikely to receive a fix.
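Because the project is archived and unlikely to be patched, a downstream mitigation is to reject outbound requests whose target resolves to loopback or internal addresses before issuing them. A minimal sketch of that check (not Pydantic-AI's code; real SSRF defenses must also handle redirects, IPv6 scopes, and DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_ssrf_risk(url: str) -> bool:
    """Return True when the URL's host resolves to a loopback, private, or
    link-local address. Minimal illustration only; see lead-in caveats."""
    host = urlparse(url).hostname
    if host is None:
        return True  # malformed URL: refuse
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # unresolvable host: refuse
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_loopback or ip.is_private or ip.is_link_local:
            return True
    return False
```

Checking the resolved address rather than the hostname string blocks aliases like `localhost`, `127.0.0.1`, and internal DNS names in one place.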
This article discusses major tech companies (Alphabet, Amazon, Microsoft, and Meta) planning to invest $600 billion in AI this year, while Persian Gulf countries are developing their own AI systems to reduce dependence on the United States. The piece raises questions about whether AI development can happen independently of US tech dominance.
Generative AI has created a widespread problem where institutions like literary magazines, academic journals, and courts are overwhelmed by AI-generated submissions, forcing them to either shut down or deploy AI tools to defend against the influx. This has created an 'arms race' where both sides use AI for opposing purposes, with potential risks to institutions but also some unexpected benefits, such as AI helping non-English-speaking researchers access writing assistance that was previously expensive.
Fix: Upgrade MarkUs to version 2.9.1 or later, where this vulnerability is fixed.
NVD/CVE Database: Microsoft Office Word has a vulnerability where it trusts user inputs when making security decisions, allowing an authorized attacker to gain elevated privileges (a higher access level) on a local computer. This vulnerability is being actively exploited in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Due date: 2026-03-03. See https://msrc.microsoft.com/update-guide/vulnerability/CVE-2026-21514 for specific vendor instructions.
CISA Known Exploited Vulnerabilities: Microsoft Windows Desktop Window Manager has a type confusion vulnerability (a bug where the software treats data as the wrong type, causing incorrect behavior) that allows an authorized attacker to gain higher-level access on a local computer. This vulnerability is currently being exploited by attackers in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities: Microsoft Windows Remote Access Connection Manager has a NULL pointer dereference vulnerability (a bug where the software tries to use a pointer that doesn't reference valid memory), which allows an attacker to crash the service and prevent it from working. This vulnerability is being actively exploited in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
CISA Known Exploited Vulnerabilities: Microsoft Windows Remote Desktop Services (a tool that lets users connect to computers remotely) has a privilege escalation vulnerability, letting an attacker who already has some access to the system gain higher-level access than they should have. This vulnerability is being actively exploited in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. For specific patches or updates, consult https://msrc.microsoft.com/update-guide/vulnerability/CVE-2026-21533.
CISA Known Exploited Vulnerabilities: Microsoft MSHTML Framework (a component that helps Windows render web content) contains a flaw in its security protection mechanism that could let an attacker bypass security features over a network. This vulnerability is being actively exploited in the wild.
Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Due date: 2026-03-03. See https://msrc.microsoft.com/update-guide/advisory/CVE-2026-21513 for details.
CISA Known Exploited Vulnerabilities: Microsoft Windows Shell has a vulnerability that lets attackers bypass a security feature over a network without authorization. This flaw is being actively exploited in the wild.
Fix: Apply mitigations per Microsoft's vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.
Researchers discovered that Group Relative Policy Optimization (GRPO), a technique normally used to improve AI safety, can be reversed to break safety alignment when the reward signals are changed. When a safety-aligned model is given even a single harmful prompt and its responses are scored by how well they fulfill the harmful request rather than refuse it, the model gradually abandons its safety guidelines and becomes willing to produce harmful content across many categories it never encountered during the attack.
AdvScan is a method for detecting adversarial examples (inputs slightly modified to trick AI models into making wrong predictions) on tiny machine learning models running on edge devices (small hardware like microcontrollers) without needing access to the model's internal details. The approach monitors power consumption patterns during the model's operation, since adversarial examples create unusual power signatures that differ from normal inputs, and uses statistical analysis to flag suspicious inputs in real-time with minimal performance overhead.
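AdvScan's actual statistical test isn't reproduced in this summary; the general pattern of calibrating on benign power traces and flagging outliers can be sketched as below. The feature (mean power per trace) and the 3-sigma threshold are illustrative assumptions of this sketch, not values from the paper.

```python
import statistics

def fit_baseline(traces):
    """Learn mean/stdev of a summary statistic (here: mean power per trace)
    from benign calibration traces."""
    feats = [statistics.fmean(t) for t in traces]
    return statistics.fmean(feats), statistics.stdev(feats)

def is_adversarial(trace, baseline, threshold=3.0):
    """Flag a trace whose mean power deviates more than `threshold` standard
    deviations from the benign baseline (toy stand-in for AdvScan's test)."""
    mu, sigma = baseline
    z = abs(statistics.fmean(trace) - mu) / sigma
    return z > threshold

# Calibrate on benign traces (toy numbers), then screen an unusual one.
benign = [
    [1.0, 1.1, 0.9],
    [1.05, 0.95, 1.03],
    [0.98, 1.02, 1.06],
    [1.1, 0.9, 0.97],
]
baseline = fit_baseline(benign)
print(is_adversarial([2.0, 2.1, 1.9], baseline))  # prints True
```

Because only a side-channel (power draw) is observed, this style of detection needs no access to the model's weights or internals, matching the black-box setting the summary describes.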
Researchers have developed a new backdoor attack method called shell code injection (SCI) that can implant malicious logic into deep learning models (neural networks trained on large datasets) without needing to poison the training data. The attack uses techniques inspired by nature, like camouflage, along with trigger verification and code packaging strategies to trick models into making wrong predictions, and it can adapt its attack targets dynamically using large language models (LLMs) to make it more flexible and harder to detect.
This research introduces PP-DR, a privacy-preserving dimensionality reduction (a technique that reduces the number of features in a dataset to make it easier to analyze) scheme that uses homomorphic encryption (a type of encryption that allows computations on encrypted data without decrypting it first) to let multiple organizations securely share and analyze data together without revealing sensitive information. The new method is much faster and more accurate than previous approaches, achieving 30 to 200 times better computational efficiency and 70% less communication overhead.