Nov 2018
On Implicit Filter Level Sparsity in Convolutional Neural Networks
Dushyant Mehta, Kwang In Kim, Christian Theobalt
TL;DR
This work shows that convolutional neural networks employing Batch Normalization and ReLU activation exhibit filter-level sparsity when trained with adaptive gradient descent and L2 regularization or weight decay; this implicit sparsity can be exploited for network speedup.
Abstract
We investigate filter level sparsity that emerges in convolutional neural networks (CNNs) which employ batch normalization and ReLU activation, and are trained with adaptive gradient descent techniques and L2 regularization or weight decay.
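The filter-level sparsity described above typically shows up in the learned Batch Normalization scale parameters (gammas): filters whose scale collapses toward zero contribute almost nothing after ReLU and become candidates for pruning. Below is a minimal PyTorch sketch of how one might measure this kind of sparsity; the 1e-3 threshold and the `vgg16_bn` stand-in model are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16_bn

def filter_level_sparsity(model: nn.Module, threshold: float = 1e-3) -> float:
    """Fraction of filters whose BatchNorm scale |gamma| is below `threshold`.

    Such filters are effectively inactive after ReLU and could be
    pruned to speed up inference. The threshold is an illustrative
    choice, not a value prescribed by the paper.
    """
    total, inactive = 0, 0
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            gamma = module.weight.detach().abs()
            total += gamma.numel()
            inactive += int((gamma < threshold).sum())
    return inactive / max(total, 1)

# Stand-in model for demonstration; in practice, apply this to a
# network trained with adaptive gradients and L2 / weight decay.
model = vgg16_bn(weights=None)
print(f"filter-level sparsity: {filter_level_sparsity(model):.2%}")
```

A freshly initialized network should report near-zero sparsity; the paper's observation is that after training under the stated regime, a substantial fraction of gammas collapse, making the measured value nontrivial.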