BriefGPT.xyz
Oct, 2024
Gradients Stand-in for Defending Deep Leakage in Federated Learning
H. Yi, H. Ren, C. Hu, Y. Li, J. Deng...
TL;DR
This work addresses the security problem of gradient leakage in federated learning by proposing a novel defense, "AdaDefense". The method substitutes a stand-in for the actual local gradients during global aggregation, effectively preventing information leakage while leaving model performance essentially unchanged. The results show that the method strengthens the privacy protection of federated learning while preserving model integrity.
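The core protocol change described above, clients uploading a stand-in in place of their true local gradients before server-side aggregation, can be sketched as follows. This is a minimal toy illustration of the substitution flow only: the `make_stand_in` construction here (a norm-matched random surrogate) is a placeholder assumption, not the paper's actual AdaDefense construction, which is built so that aggregation still behaves like the true gradients.

```python
import numpy as np

def make_stand_in(grad, rng):
    # Hypothetical stand-in: a random surrogate scaled to the true
    # gradient's norm. The real AdaDefense stand-in differs; this only
    # shows that the raw gradient never leaves the client.
    noise = rng.standard_normal(grad.shape)
    noise *= np.linalg.norm(grad) / (np.linalg.norm(noise) + 1e-12)
    return noise

def federated_round(local_grads, rng):
    # Each client uploads a stand-in instead of its true gradient;
    # the server aggregates the stand-ins as usual (FedAvg-style mean).
    stand_ins = [make_stand_in(g, rng) for g in local_grads]
    return np.mean(stand_ins, axis=0), stand_ins

rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(3)]
agg, stand_ins = federated_round(grads, rng)
```

Because the server and any eavesdropper only ever see the stand-ins, gradient-inversion ("deep leakage") attacks that reconstruct training data from raw gradients have nothing genuine to invert.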
Abstract
Federated Learning (FL) has become a cornerstone of privacy protection, shifting the paradigm towards localizing sensitive data while only sending model gradients to a central server. This strategy is designed to …