All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
This item is a biography of Dr. Kamta Nath Mishra, an academic researcher with over 25 years of experience in computer science. While the title mentions a hybrid machine learning and cryptography model for cloud-IoT (internet of things, networked physical devices) security, the provided content contains only his educational background and career history with no technical details about the actual security research or any vulnerabilities.
LlamaIndex v0.14.18 is a release that deprecates Python 3.9 (stops supporting an older version of the Python programming language) across multiple packages and includes several bug fixes, such as preserving chat history during incomplete data streaming and preventing division-by-zero errors. The update also adds features like improved text filtering across different database backends and updates dependency versions across 51 package directories.
The Bedrock AgentCore Starter Toolkit (a tool for building AI agents on AWS) before version v0.1.13 has a vulnerability where it doesn't properly verify S3 ownership (S3 is AWS's cloud storage service). This missing check could allow an attacker to inject malicious code during the build process (when the software is being compiled), potentially leading to code execution in the running application. The vulnerability only affects users who built the toolkit after September 24, 2025.
Memray versions 1.19.1 and earlier had a stored XSS vulnerability (a type of attack where malicious code is permanently stored and executed when viewed) in their HTML reports because command-line arguments were inserted directly into the HTML without escaping (converting special characters so they display as text rather than code). An attacker who could control a program's script name or command-line arguments could inject JavaScript that would execute when someone opened the generated report in a browser.
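The underlying fix for this class of bug is standard output encoding: escape any attacker-influenced string before interpolating it into HTML. A minimal sketch of the principle using Python's standard library (the helper is illustrative, not Memray's actual code):

```python
import html

def render_title(argv: list[str]) -> str:
    """Build a report <title> from command-line arguments, escaping
    them so e.g. <script> renders as literal text, not markup."""
    raw = " ".join(argv)
    return f"<title>memray report: {html.escape(raw)}</title>"

# A malicious script name or argument no longer injects markup:
print(render_title(["app.py", "<script>alert(1)</script>"]))
```

Because `html.escape` converts `<`, `>`, and `&` to entities, the injected payload is displayed as text instead of being executed by the browser.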
A vulnerability (CVE-2026-4270) exists in AWS API MCP Server versions 0.2.14 through 1.3.8, which is software that lets AI assistants interact with AWS services. The bug allows attackers to bypass file access restrictions (the security controls that limit which files an AI can read) and potentially read any file on the system, even when those restrictions are supposed to be enabled.
ONNX's onnx.hub.load() function has a security flaw where the silent=True parameter completely disables warnings and user confirmations when loading models from untrusted repositories (sources not officially verified). This means an attacker could trick an application into silently downloading and running malicious models from their own GitHub repository without the user knowing, potentially allowing theft of sensitive files like SSH keys or cloud credentials.
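A defensive pattern, independent of any ONNX API change, is to validate the repository against an explicit allowlist before loading anything and to leave `silent` at its default of `False`. A hypothetical sketch (the allowlist and helper are illustrative, not part of ONNX):

```python
# Repositories you have vetted; everything else is rejected outright.
TRUSTED_REPOS = {"onnx/models:main"}

def safe_repo(repo: str) -> str:
    """Refuse to load models from repositories outside the allowlist."""
    if repo not in TRUSTED_REPOS:
        raise ValueError(f"untrusted ONNX model repo: {repo!r}")
    return repo

# Usage (assumes the onnx package is installed; keeping silent=False
# preserves the untrusted-repo warning/confirmation):
#   import onnx.hub
#   model = onnx.hub.load("resnet50",
#                         repo=safe_repo("onnx/models:main"),
#                         silent=False)
```

The allowlist check fails closed: a repository name supplied by an attacker raises an exception before any network request is made.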
This is an interview with Yahoo CEO Jim Lanzone discussing Yahoo's business strategy, including its new AI-powered search tool called Scout, its advertising platform decisions, and portfolio changes like selling Engadget and TechCrunch. The article explains advertising technology concepts like SSPs (supply-side platforms, which let websites sell ad space) and DSPs (demand-side platforms, which let advertisers automatically buy ads across many sites), showing how Yahoo is shifting investment toward the more profitable DSP business model.
CVE-2026-26133 is a vulnerability in Microsoft 365 Copilot where an attacker can use AI command injection (tricking the AI system by embedding hidden commands in normal-looking input) to access and disclose information over a network without authorization. The vulnerability has a CVSS score (a 0-10 rating of how severe a security flaw is) of 4.0, indicating moderate severity.
CVE-2026-25083 is a missing authorization vulnerability in GROWI (a collaboration platform) affecting version 7.4.5 and earlier. A logged-in user who knows the identifier of a shared AI assistant can view and modify other users' conversation threads and messages without permission, because the API endpoints don't properly verify whether the user should have access. This is rated as HIGH severity with a CVSS score (a 0-10 scale measuring vulnerability severity) of 8.7.
Raytha CMS has a vulnerability where attackers can trick the server into sending password reset emails with links pointing to the attacker's domain instead of the legitimate one by spoofing HTTP headers (X-Forwarded-Host or Host, which tell the server what domain name was used to reach it). When a victim clicks the malicious link, their password reset token gets sent to the attacker, who can then reset their password and take over their account.
Raytha CMS has an SSRF vulnerability (server-side request forgery, where an attacker tricks the server into making HTTP requests to unintended locations) in its "Themes - Import from URL" feature that allows a high-privileged attacker to make the server send HTTP requests to attacker-chosen destinations. This vulnerability has a CVSS score (a 0-10 rating of how severe a vulnerability is) of 5.1, classified as medium severity.
CVE-2025-15060 is a remote code execution vulnerability in claude-hovercraft that allows attackers to run arbitrary code without needing to log in. The flaw exists in the executeClaudeCode method, which fails to properly validate user input before using it in a system call (a request to run operating system commands), allowing attackers to inject malicious commands.
MLflow versions before v3.7.0 contain a command injection vulnerability (a flaw where attackers insert malicious commands into input that gets executed) in the sagemaker module. An attacker can exploit this by passing a malicious container image name through the `--container` parameter, which the software unsafely inserts into shell commands and runs, allowing arbitrary command execution on affected systems.
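The general fix for this class of injection is to stop interpolating untrusted values into a shell string: pass the command as an argument list, or quote each piece if a shell string is unavoidable. A sketch of both patterns (not MLflow's actual code; the `docker push` command and image name are illustrative):

```python
import shlex

def build_push_cmd(image: str) -> list[str]:
    """Return an argv list; each element is passed verbatim to the
    program, so shell metacharacters in `image` are inert."""
    return ["docker", "push", image]

malicious = "repo/img; rm -rf /"   # injection attempt via the image name
cmd = build_push_cmd(malicious)
# subprocess.run(cmd) would invoke `docker push` with one odd argument;
# no shell is involved, so the `; rm -rf /` is never interpreted.

# If a shell string is truly unavoidable, quote every untrusted piece:
safe = f"docker push {shlex.quote(malicious)}"
print(safe)
```

With `shlex.quote`, the whole attacker-supplied value is wrapped as a single shell word, so the embedded `;` loses its special meaning.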
This week's security news includes Google patching two actively exploited Chrome vulnerabilities in the graphics and JavaScript engines that could allow code execution, Meta discontinuing encrypted messaging on Instagram, and law enforcement disrupting botnets (malware networks that hijack routers) like SocksEscort and KadNap that were being used for fraud and illegal proxy services. A threat actor also exploited a compromised npm package (a JavaScript code library) to breach an AWS cloud environment and steal data.
Fix: Update to Bedrock AgentCore Starter Toolkit version v0.1.13 or later.
AWS Security Bulletins
OpenAI has agreed to allow the Pentagon to use its AI technology in classified military environments, raising questions about potential applications in the escalating conflict with Iran. The article describes how OpenAI's generative AI (AI that can produce text, images, or other outputs based on patterns) could be used to help analyze potential military targets and prioritize strikes, as well as through a partnership with Anduril to defend against drone attacks, marking the first serious military testing of generative AI for real-time combat decisions.
Encyclopedia Britannica and Merriam-Webster sued OpenAI, claiming it used their copyrighted content to train ChatGPT without permission and that GPT-4 (OpenAI's AI model) now outputs text that closely matches their original material. The publishers allege that OpenAI 'memorized' their content during training, meaning the AI absorbed and can reproduce substantial portions of their work.
Fix: Upgrade to Memray 1.19.2, and avoid attaching Memray to untrusted processes until you have upgraded.
GitHub Advisory Database
Fix: This issue was fixed in version 1.4.6.
NVD/CVE Database
Fix: This issue was fixed in version 1.4.6.
NVD/CVE Database
Fix: Update MLflow to version v3.7.0 or later.
NVD/CVE Database
Fix: Google addressed the Chrome vulnerabilities in versions 146.0.7680.75/76 for Windows and macOS, and 146.0.7680.75 for Linux.
The Hacker News
Shadow AI refers to AI tools used throughout an organization without IT oversight or approval, creating security and governance challenges. The source describes Nudge Security as a platform that addresses this by providing continuous discovery of AI apps and user accounts, monitoring for sensitive data sharing in AI conversations, and tracking which AI tools have access to company data through integrations.
Fix: According to the source, Nudge Security delivers mitigation through: (1) a lightweight IdP (identity provider, the system that manages user identities) integration with Microsoft 365 or Google Workspace that takes less than 5 minutes to enable, which analyzes machine-generated emails to detect new AI accounts and tool adoption; (2) a browser extension for real-time monitoring of risky behaviors and alerts when sensitive data (PII, secrets, financial info) is shared with AI tools; (3) tracking of SaaS-to-AI integrations and their access scopes; and (4) configurable alerts for new AI tools or policy violations.
BleepingComputer
This article examines how large language models (AI systems trained on huge amounts of text data) can be used in cybersecurity red teaming (simulated attacks to test defenses) and blue teaming (defensive security work), mapping their abilities to established security frameworks. However, LLMs struggle in difficult, real-world situations because they have limitations like hallucinations (generating false information confidently), poor memory of long conversations, and gaps in logical reasoning.
Autonomous AI agents (AI systems that operate independently to complete complex tasks with minimal human oversight) have advanced rapidly, creating new governance challenges because they can operate at machine speed without humans in the loop to approve each decision. Unlike traditional chatbots where humans reviewed outputs before consequential actions, agents now directly modify enterprise systems and data, making organizations legally liable for any harm caused (similar to how parents are responsible for their children's actions). Without building governance rules directly into the code that controls these agents' permissions and actions, organizations face significant risks from drift (where agents behave differently than intended) and unauthorized access to critical systems.
Organizations typically use separate security tools (BAS tools, pentesting products, vulnerability scanners) that don't communicate with each other, creating blind spots because attackers chain multiple vulnerabilities together in coordinated operations. The article proposes that agentic AI (autonomous AI agents that can plan, execute, and reason through complex tasks without human direction at each step) should be applied to security validation to create a unified, continuous system that combines adversarial perspective (how attackers get in), defensive perspective (whether defenses stop them), and risk perspective (which exposures actually matter).