The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Model Context Protocol Security Gaps Highlighted: MCP (a protocol that connects AI agents to data sources and tools) has seen rapid enterprise adoption but faces serious risks, including prompt injection (tricking an AI by hiding instructions in its input), token theft, and data leaks. Despite recent improvements such as OAuth support and an official registry, organizations still lack adequate tools for access control, authorization checks, and detailed logging to protect sensitive data.
This research paper addresses generalized out-of-distribution detection (OOD detection, where an AI system identifies inputs that differ substantially from its training data), which matters for AI systems in safety-critical applications. Rather than designing yet another scoring function, the authors propose a new decision rule, a generalized Benjamini-Hochberg procedure, that frames the OOD decision as hypothesis testing (a statistical method for making decisions about data). They prove this method provides false-positive-rate control that traditional fixed-threshold approaches lack.
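To make the idea concrete, here is a minimal sketch of the classical Benjamini-Hochberg procedure applied to OOD detection, not the paper's generalized variant. The conformal-style p-values, the toy score distributions, and all function names are illustrative assumptions; the point is simply how a BH decision rule replaces a single fixed threshold.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classical Benjamini-Hochberg: flag hypotheses (here, candidate OOD
    inputs) while controlling the false discovery rate at level alpha."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k / m) * alpha, then reject
    # the k smallest p-values.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Toy setup (assumed, not from the paper): conformal p-values computed
# by ranking each test score against in-distribution calibration scores.
rng = np.random.default_rng(0)
calib = rng.normal(0, 1, 1000)  # scores on held-out in-distribution data
test = np.concatenate([rng.normal(0, 1, 5),   # in-distribution inputs
                       rng.normal(6, 1, 5)])  # clearly shifted (OOD) inputs
p_vals = [(1 + np.sum(calib >= t)) / (1 + len(calib)) for t in test]
flags = benjamini_hochberg(p_vals, alpha=0.1)
print(flags)  # True marks an input flagged as OOD
```

Unlike a single hand-tuned cutoff, the rejection threshold here adapts to the whole batch of p-values, which is what yields the provable error-rate control the paper emphasizes.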