December 2017
Learning Sparse Neural Networks through $L_0$ Regularization
Christos Louizos, Max Welling, Diederik P. Kingma
TL;DR
This work proposes a method for pruning neural networks with a sparsity constraint: the network is shrunk through a collection of stochastic gates, so that training and inference can become faster and more efficient.
Abstract
We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization.
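The gating idea summarized above can be sketched in code. Below is a minimal, hedged PyTorch sketch of multiplying weights by stochastic "hard concrete" gates so that the expected $L_0$ norm of the gated weights stays differentiable and can be added to the training loss. The class and parameter names (`L0Gate`, `log_alpha`, `beta`, `gamma`, `zeta`) are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch (not the authors' implementation): stochastic gates over weights
# with a differentiable expected-L0 penalty, following the hard concrete idea.
import math
import torch
import torch.nn as nn


class L0Gate(nn.Module):
    """One stochastic gate per weight; the expected L0 norm is differentiable."""

    def __init__(self, shape, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(shape))  # gate location parameters
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            # Sample a binary concrete variable, then "stretch" and hard-clip to [0, 1].
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)  # deterministic gate at test time
        s_bar = s * (self.zeta - self.gamma) + self.gamma
        return s_bar.clamp(0.0, 1.0)  # exact zeros prune the corresponding weights

    def expected_l0(self):
        # Probability that each gate is non-zero; the sum is the expected L0 penalty.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()


# Usage sketch: gate a weight matrix and add the expected L0 norm to the loss.
weight = nn.Parameter(torch.randn(256, 128))
gate = L0Gate(weight.shape)
x = torch.randn(32, 128)
y = x @ (weight * gate()).t()                     # gated weights
loss = y.pow(2).mean() + 1e-3 * gate.expected_l0()
loss.backward()
```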