Privacy Against Agnostic Inference Attacks in Vertical Federated Learning
Summary
This academic paper examines privacy risks in vertical federated learning (VFL), a machine learning setting in which several organizations each hold different features of the same samples and jointly train a model, under agnostic inference attacks, i.e., attacks in which the adversary has no prior knowledge of the target model's structure or parameters. The paper analyzes how an attacker could infer private information from the intermediate results the participating parties exchange during training and inference.
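The vertical split described above can be sketched with a toy linear model. This is an illustrative assumption, not the paper's actual protocol: party names, features, and the split-score aggregation are all hypothetical, chosen only to show why exchanged partial results can leak information.

```python
# Minimal sketch (illustrative assumptions, not the paper's protocol):
# two parties hold disjoint feature columns of the same sample and
# jointly score a linear model by summing their partial contributions.

def partial_score(weights, features):
    """Each party computes its share of the joint linear score."""
    return sum(w * x for w, x in zip(weights, features))

# Hypothetical split: Party A holds [age, income]; Party B holds [purchases].
x_A, w_A = [0.3, 0.7], [1.0, -0.5]
x_B, w_B = [0.9], [2.0]

s_A = partial_score(w_A, x_A)   # A's contribution
s_B = partial_score(w_B, x_B)   # B's contribution
joint = s_A + s_B               # aggregated by a coordinator

# Leakage intuition: a party that observes the joint score and knows
# its own share can recover the other party's contribution exactly,
# without knowing anything about B's model structure.
inferred_s_B = joint - s_A
```

The point of the sketch is only the last line: even a model-agnostic adversary can subtract what it knows from what it observes, which is the kind of information flow the paper's inference attacks exploit.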
Classification
Related Issues
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
CVE-2025-54868: LibreChat is a ChatGPT clone with additional features. In versions 0.0.6 through 0.7.7-rc1, an exposed testing endpoint
Original source: https://dl.acm.org/doi/abs/10.1145/3808698?ai=2p1&mi=hx017f&af=R
First tracked: May 7, 2026 at 08:00 AM
Classified by LLM (prompt v3) · confidence: 85%