BriefGPT.xyz
Feb 2021
Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning
Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar
TL;DR
This paper proposes AdaQuantFL, an adaptive quantization strategy that varies the number of quantization levels over the course of training to achieve communication efficiency together with a low error floor. Experiments show that, compared with fixed quantization-level settings, the method converges using fewer communicated bits, with little to no impact on training and test accuracy.
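The TL;DR above describes varying the number of quantization levels during training. As an illustration only, not the paper's exact algorithm, here is a minimal sketch of the standard unbiased stochastic uniform quantizer that such schemes build on; `num_levels` is the knob an adaptive method like AdaQuantFL would adjust from round to round:

```python
import math
import random

def stochastic_quantize(v, num_levels):
    """Unbiased stochastic uniform quantization of a vector v.

    Each coordinate magnitude |v_i| / ||v|| is mapped onto one of
    `num_levels` uniform levels, rounding up or down at random so that
    the quantized vector equals v in expectation. Fewer levels means
    fewer bits per coordinate but higher quantization variance.
    """
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return [0.0] * len(v)
    out = []
    for x in v:
        scaled = abs(x) / norm * num_levels      # lies in [0, num_levels]
        lower = math.floor(scaled)
        # round up with probability equal to the fractional part (unbiased)
        level = lower + (1 if random.random() < scaled - lower else 0)
        out.append(math.copysign(level * norm / num_levels, x))
    return out
```

Transmitting the quantized vector then requires only the norm plus roughly `log2(num_levels + 1)` bits per coordinate, which is why lowering the level count early in training saves communication.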
Abstract
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and high-dimensional models. Gradient quantization […]