Feb, 2023
Ultra-low Precision Multiplication-free Training for Deep Neural Networks
Chang Liu, Rui Zhang, Xishan Zhang, Yifan Hao, Zidong Du...
TL;DR
This paper proposes an adaptive layer-wise scaling power-of-two quantization method (ALS-POTQ) and a multiplication-free MAC scheme (MF-MAC), which together eliminate all FP32 multiplications in the linear layers. Combined with weight bias correction and parameterized ratio clipping techniques to improve training stability and accuracy, the approach achieves higher energy efficiency and accuracy than existing methods.
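To make the core idea concrete, below is a minimal NumPy sketch of power-of-two (PoT) quantization with a per-layer scale: once weights are constrained to signed powers of two, multiplying by a weight reduces to a bit shift, which is the property MF-MAC exploits. The function name `pot_quantize`, the bit width, and the max-magnitude normalization are illustrative assumptions, not the paper's exact ALS-POTQ procedure.

```python
import numpy as np

def pot_quantize(w, n_bits=4):
    """Quantize weights to signed powers of two (illustrative sketch,
    not the paper's ALS-POTQ algorithm).

    Each weight maps to sign(w) * 2**e with integer exponent e, so a
    multiply by the quantized weight becomes a shift in hardware."""
    # Layer-wise scale: normalize by the largest magnitude in the tensor
    # (a simple stand-in for the paper's adaptive layer-wise scaling).
    scale = np.max(np.abs(w))
    w_norm = w / scale

    # Round log2 of each magnitude to the nearest integer exponent,
    # clipped to the exponent range representable with n_bits codes.
    e_max = 0
    e_min = -(2 ** (n_bits - 1)) + 1   # reserve one code for zero
    eps = 1e-12
    exp = np.clip(np.round(np.log2(np.abs(w_norm) + eps)), e_min, e_max)

    q = np.sign(w_norm) * np.exp2(exp)
    q[np.abs(w_norm) < 2.0 ** e_min] = 0.0  # underflow -> zero code
    return q, scale

# Usage: quantize a random weight matrix and inspect the approximation.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = pot_quantize(w)
print(q * s)  # dequantized approximation of w
```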
Abstract
The training for deep neural networks (DNNs) demands immense energy consumption, which restricts the development of deep learning as well as increases carbon emissions. Thus, the study of …