May 2023
Faster Federated Learning with Decaying Number of Local SGD Steps
Jed Mills, Jia Hu, Geyong Min
TL;DR
This paper proposes a modification to the Federated Averaging (FedAvg) algorithm for strongly convex objectives: decaying the number of Stochastic Gradient Descent (SGD) steps K performed in each round of local training, which improves the convergence of the federated learning model. The approach is validated experimentally on four benchmark FL datasets.
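The idea is straightforward to sketch. Below is a minimal, illustrative FedAvg loop in PyTorch where the number of local SGD steps K shrinks each round. The exponential decay schedule, uniform client weighting, full client participation, and all names here (`fedavg_decaying_K`, `K0`, `decay`, etc.) are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: FedAvg with a decaying number of local SGD steps K.
# Assumptions (not from the paper): exponential decay of K, uniform
# averaging over clients, and full participation every round.
import copy
import torch


def local_sgd(model, loader, K, lr):
    """Run K steps of SGD on one client's local data; return the weights."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    it = iter(loader)
    for _ in range(K):
        try:
            x, y = next(it)
        except StopIteration:  # restart the local data stream if exhausted
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()


def fedavg_decaying_K(global_model, client_loaders, rounds, K0=50, decay=0.9, lr=0.1):
    for t in range(rounds):
        # Decay the number of local steps each round (hypothetical schedule).
        K = max(1, int(K0 * decay ** t))
        client_weights = [
            local_sgd(global_model, loader, K, lr) for loader in client_loaders
        ]
        # Average the client models parameter-wise (uniform weighting assumed).
        avg = {
            name: torch.stack([w[name].float() for w in client_weights]).mean(0)
            for name in client_weights[0]
        }
        global_model.load_state_dict(avg)
    return global_model
```

Starting with a large K amortizes communication early on, while the decay reduces client drift as the model approaches the optimum; the schedule above is one simple way to realize that trade-off.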
Abstract
In federated learning (FL), client devices connected over the internet collaboratively train a machine learning model without sharing their private data with a central server or with other clients. The seminal Federated Averaging (FedAvg) …