Use of deep learning (DL) in commercial applications such as image
classification, sentiment analysis, and speech recognition is increasing. When
training DL models with large numbers of parameters and/or large datasets, the
cost and speed of training can become prohibitive. Distributed DL training
offers a way to reduce these costs.
This paper studies the bottlenecks of collaborative training in deep learning and proposes a new algorithmic framework suited to collaborative settings. It demonstrates the method on pretraining SwAV and ALBERT under realistic conditions, showing performance comparable to conventional setups at a fraction of the cost. Finally, it provides a detailed report of a successful collaborative language model pretraining run with 40 participants.