BriefGPT.xyz
Feb, 2023
GAIN: Enhancing Byzantine Robustness in Federated Learning with Gradient Decomposition
Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu...
TL;DR
To address Byzantine attacks in federated learning and the limitations of aggregation rules under differing data distributions, this paper proposes GAIN, a gradient decomposition scheme that adapts to heterogeneous datasets and improves the parallel training of existing robust aggregation algorithms.
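The TL;DR above does not spell out GAIN's algorithm, but the family of Byzantine-robust aggregation rules it builds on can be illustrated with a standard example. The sketch below shows coordinate-wise median aggregation, a classic robust rule (not GAIN itself); the client gradients and the single malicious update are made-up values for illustration.

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate client gradient vectors by the per-coordinate median,
    a classic Byzantine-robust rule (illustrative; not the GAIN scheme)."""
    return np.median(np.stack(gradients), axis=0)

# Three honest clients roughly agree on the true gradient; one
# Byzantine client submits an arbitrarily large poisoned update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]

robust = coordinate_wise_median(honest + byzantine)
naive = np.mean(np.stack(honest + byzantine), axis=0)

# The median stays near the honest consensus, while the plain mean
# is dragged far away by the single attacker.
print(robust)  # [1.05 1.95]
print(naive)   # [ 25.75 -23.5 ]
```

The limitation that motivates work like GAIN is that such rules degrade when honest clients hold heterogeneous (non-IID) data, since honest gradients then disagree with each other and are harder to separate from Byzantine ones.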
Abstract
Federated learning provides a privacy-aware learning framework by enabling participants to jointly train models without exposing their private data. However, federated learning has exhibited vulnerabilities to →