BriefGPT.xyz
Sep, 2024
A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
Yan Sun, Li Shen, Dacheng Tao
TL;DR
This work addresses the "dual drift" problem of classical federated primal-dual methods in non-convex settings, which arises because the dual variables of long-inactive clients become stale under partial participation. It proposes a new Aligned Federated Primal-Dual (A-FedPD) method that constructs virtual dual updates to align the global consensus with the local dual variables, improving both optimization and generalization efficiency, and it validates the method's effectiveness through extensive experiments.
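To make the "dual drift" problem and the virtual dual update concrete, here is a toy sketch of a consensus primal-dual (ADMM-style) federated loop. This is an illustrative reconstruction, not the paper's exact algorithm: the scalar client objectives f_i(w) = 0.5*(w - b_i)^2 are hypothetical, and the virtual-update rule for inactive clients is one plausible reading of "aligning local duals with the global consensus".

```python
import random

def run(rounds=200, rho=1.0, participation=0.5, virtual_dual=True, seed=0):
    # Hypothetical client data: each client i minimizes 0.5*(w - b[i])^2,
    # so the consensus optimum is the mean of b, which is 0 here.
    rng = random.Random(seed)
    b = [-2.0, -1.0, 1.0, 2.0]
    n = len(b)
    lam = [0.0] * n   # per-client dual variables
    z = 0.0           # global consensus model
    for _ in range(rounds):
        # Partial participation: each round only a random subset is active.
        active = [i for i in range(n) if rng.random() < participation] or [0]
        w = {}
        for i in active:
            # Closed-form argmin of the augmented Lagrangian
            # f_i(w) + lam_i*(w - z) + (rho/2)*(w - z)^2.
            w[i] = (b[i] - lam[i] + rho * z) / (1.0 + rho)
        # Server aggregates the participants' primal/dual information.
        z_new = sum(w[i] + lam[i] / rho for i in active) / len(active)
        for i in active:
            lam[i] += rho * (w[i] - z_new)  # standard dual ascent
        if virtual_dual:
            for j in range(n):
                if j not in w:
                    # Virtual dual update (illustrative): let an inactive
                    # client's stale dual track the moving consensus, so it
                    # does not drift arbitrarily far from the active ones.
                    lam[j] += rho * (z - z_new)
        z = z_new
    return z
```

With full participation this reduces to standard consensus ADMM and converges to the mean of the client optima; with partial participation, toggling `virtual_dual` lets one compare drifting versus consensus-tracking duals.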
Abstract
As a popular paradigm for balancing data privacy and collaborative training, Federated Learning (FL) is flourishing as a way to process large-scale heterogeneous datasets distributed across edge clients. Due to band…