BriefGPT.xyz
Jul, 2019
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou
TL;DR
This paper proposes a vector quantization method that reduces the memory footprint of convolutional neural network architectures, delivering high-accuracy image recognition with a small memory budget.
Abstract
In this paper, we address the problem of reducing the memory footprint of ResNet-like convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights.
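To make the core idea concrete, here is a minimal sketch of vector (product) quantization of a weight matrix: each row is split into small subvectors, a per-block codebook is learned with k-means, and the matrix is reconstructed from the nearest codewords. This is an illustrative simplification, not the authors' implementation — their method additionally weights the clustering by input activations so that the *outputs* of the layer, not the weights themselves, are preserved; the function name, block size `d`, and codebook size `k` are assumptions.

```python
import numpy as np

def quantize_weights(W, d=4, k=16, iters=20, seed=0):
    """Product-quantize the rows of W (hypothetical helper):
    split each row into subvectors of length d, learn a k-entry
    codebook per block with plain k-means, and rebuild W from the
    nearest codewords."""
    rng = np.random.default_rng(seed)
    n, cols = W.shape
    assert cols % d == 0, "row length must be divisible by block size"
    W_hat = np.empty_like(W)
    for b in range(cols // d):
        X = W[:, b * d:(b + 1) * d]                    # (n, d) subvectors
        C = X[rng.choice(n, size=k, replace=False)].copy()  # init codebook
        for _ in range(iters):
            # assign each subvector to its nearest codeword
            a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
            # update each codeword as the mean of its assigned subvectors
            for j in range(k):
                if np.any(a == j):
                    C[j] = X[a == j].mean(axis=0)
        W_hat[:, b * d:(b + 1) * d] = C[a]
    return W_hat

W = np.random.default_rng(1).standard_normal((64, 32)).astype(np.float32)
W_q = quantize_weights(W)
# storage cost: log2(k) bits per subvector index plus the small codebooks,
# versus 32 bits per float for the original weights
rel_err = np.linalg.norm(W - W_q) / np.linalg.norm(W)
```

The compression ratio here is roughly `32 * d / log2(k)` to one, ignoring codebook overhead; the paper's contribution is choosing the codewords so that this compression degrades the network's predictions as little as possible.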