Apr, 2023
More Communication Does Not Result in Smaller Generalization Error in Federated Learning
Romain Chor, Milad Sefidgaran, Abdellatif Zaidi
TL;DR
This work studies the generalization error of statistical learning models in federated learning, examines how the number of model-aggregation rounds affects the final aggregated model, and suggests that the overall population risk can be reduced by choosing the number of aggregation rounds appropriately.
Abstract
We study the generalization error of statistical learning models in a federated learning (FL) setting. Specifically, there are $K$ devices …
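
As a rough illustration of the setting sketched in the abstract (K devices each holding a local dataset, local training, and repeated server-side aggregation), here is a minimal toy sketch of federated averaging with a configurable number of aggregation rounds. The quadratic objective, function names, and hyperparameters are assumptions made for illustration; this is not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of the FL setting: K clients, each with its own dataset of
# size n, run local SGD; a central server averages the local models over R
# communication (aggregation) rounds. Toy linear-regression objective; all
# names and hyperparameters are illustrative assumptions, not the paper's.

rng = np.random.default_rng(0)
K, n, d = 10, 100, 5            # clients, samples per client, model dimension
R, local_steps, lr = 20, 10, 0.05

w_true = rng.normal(size=d)
# Each client k holds an independent dataset (X_k, y_k).
data = []
for _ in range(K):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    data.append((X, y))

def local_sgd(w, X, y, steps, lr):
    """Run a few SGD steps on one client's squared-error loss."""
    w = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = 2 * (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
    return w

w_global = np.zeros(d)
for r in range(R):
    # Each client starts from the current global model and trains locally.
    local_models = [local_sgd(w_global, X, y, local_steps, lr) for X, y in data]
    # Server aggregates by simple averaging of the K local models.
    w_global = np.mean(local_models, axis=0)

print("distance to w_true after", R, "rounds:", np.linalg.norm(w_global - w_true))
```

Varying `R` in a sketch like this is the knob the paper's question is about: how the number of aggregation rounds relates to the generalization error of the final aggregated model.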