May 2019
On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
Hao Yu, Rong Jin, Sen Yang
TL;DR
This paper studies distributed optimization methods for deep learning and shows that distributed momentum SGD offers advantages in both performance and communication efficiency, proving that it enjoys the same linear speedup property as distributed SGD.
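To make the setting concrete, below is a minimal sketch (not the authors' code) of communication-efficient momentum SGD of the kind the paper analyzes: each worker runs local heavy-ball momentum SGD on its own stochastic gradients, and models and momentum buffers are averaged only every few iterations instead of at every step. The toy objective, noise model, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    """Gradient of a toy objective f(x) = 0.5 * ||x||^2, plus noise."""
    return x + 0.1 * rng.standard_normal(x.shape)

def local_momentum_sgd(num_workers=8, dim=10, steps=200,
                       lr=0.05, beta=0.9, comm_period=10):
    # Each worker keeps its own copy of the model and momentum buffer.
    x = np.zeros((num_workers, dim))
    m = np.zeros((num_workers, dim))
    for t in range(1, steps + 1):
        for k in range(num_workers):
            g = stochastic_grad(x[k])   # local stochastic gradient
            m[k] = beta * m[k] + g      # heavy-ball momentum update
            x[k] = x[k] - lr * m[k]
        if t % comm_period == 0:
            # Communication round: average models and momentum
            # buffers across all workers.
            x[:] = x.mean(axis=0)
            m[:] = m.mean(axis=0)
    return x.mean(axis=0)

print(local_momentum_sgd())
```

Averaging only every `comm_period` steps reduces communication cost by that factor; the paper's contribution is showing that, despite these infrequent synchronizations, the method still attains the same linear speedup in the number of workers as fully synchronous distributed SGD.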
Abstract
Recent developments on large-scale distributed machine learning applications, e.g., deep neural networks, benefit enormously from the advances in distributed non-convex optimization techniques, e.g., distributed stochastic gradient descent (SGD). …