Jan 2019
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal...
TL;DR
A statistical analysis of accumulator precision in deep learning yields a method for precisely tailoring the bit-width of the compute hardware, and this tailoring is shown to produce area- and power-optimal systems.
Abstract
Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators […]
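
The abstract is cut off on the source page. As a rough, self-contained illustration of the trade-off it describes, the sketch below simulates an inner product whose running partial sum is rounded to a limited-mantissa accumulator after every addition and compares the result against a high-precision reference. The rounding model, the vector length n, and the mantissa widths are illustrative assumptions, not the statistical analysis proposed in the paper.

# Illustrative sketch (not the paper's method): model how the accumulation
# bit-width affects the fidelity of an inner product when partial sums are
# kept in a reduced-precision floating-point accumulator.
# Assumption: the accumulator is modeled by rounding the running sum to a
# float with `acc_mantissa_bits` mantissa bits after every addition.

import numpy as np

def round_to_mantissa(x: float, mantissa_bits: int) -> float:
    """Round x to a float with the given number of mantissa bits (toy model)."""
    if x == 0.0:
        return 0.0
    exp = np.floor(np.log2(abs(x)))          # exponent of the leading bit
    scale = 2.0 ** (exp - mantissa_bits)     # value of the last kept mantissa bit
    return float(np.round(x / scale) * scale)

def low_precision_dot(a: np.ndarray, b: np.ndarray, acc_mantissa_bits: int) -> float:
    """Dot product whose running accumulator is rounded after every addition."""
    acc = 0.0
    for ai, bi in zip(a, b):
        acc = round_to_mantissa(acc + ai * bi, acc_mantissa_bits)
    return acc

rng = np.random.default_rng(0)
n = 4096                       # accumulation length (e.g., one dot product in a layer)
a = rng.standard_normal(n).astype(np.float32)
b = rng.standard_normal(n).astype(np.float32)

reference = float(np.dot(a.astype(np.float64), b.astype(np.float64)))
for bits in (23, 12, 8, 5):    # hypothetical accumulator mantissa widths
    approx = low_precision_dot(a, b, bits)
    print(f"mantissa bits={bits:2d}  result={approx:+.4f}  abs error={abs(approx - reference):.2e}")

Running the sketch shows the accumulated error growing as the mantissa width shrinks, which is the effect that motivates keeping accumulators wider than the quantized weights and activations, and that the paper's bit-width scaling analysis aims to bound precisely.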