BriefGPT.xyz
Nov, 2018
RePr: Improved Training of Convolutional Filters
Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang
TL;DR
The paper shows that temporarily pruning a subset of a model's filters, restoring them, and repeating this cycle reduces the overlap among the learned features and thereby improves the model's generalization. Existing model-pruning criteria are not optimal for choosing which filters to prune in this setting, so inter-filter orthogonality is introduced as the ranking criterion. The method applies to many types of convolutional neural networks and improves performance across a range of tasks, especially for small networks.
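The ranking step described above can be illustrated with a minimal sketch. The function name and exact scoring details below are assumptions, not the paper's reference implementation: each filter of a layer is flattened and normalized, and a filter's score is its total absolute cosine overlap with the layer's other filters — the higher the overlap, the more redundant the filter, making it a candidate for temporary pruning.

```python
import numpy as np

def filter_orthogonality_scores(weights):
    """Score conv filters by their overlap with the other filters
    in the same layer (a RePr-style inter-filter orthogonality sketch).

    weights: array of shape (num_filters, ...); each filter is flattened.
    Returns one score per filter: the sum of absolute cosine similarities
    with all other filters. Higher = more redundant.
    """
    n = weights.shape[0]
    W = weights.reshape(n, -1).astype(float)
    # Normalize each flattened filter to unit length.
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Off-diagonal entries of W @ W.T measure pairwise overlap;
    # subtracting the identity removes each filter's self-similarity.
    overlap = np.abs(W @ W.T) - np.eye(n)
    return overlap.sum(axis=1)

# Example: two identical filters overlap fully; a third orthogonal
# filter does not overlap with either of them.
filters = np.array([[1.0, 0.0],
                    [1.0, 0.0],
                    [0.0, 1.0]])
scores = filter_orthogonality_scores(filters)
# → scores is [1.0, 1.0, 0.0]: the duplicated filters would be
#   pruned first, the orthogonal one kept.
```

In the full procedure, the lowest-ranked (most overlapping) filters are dropped for a few epochs, the rest of the network is trained, and the pruned filters are then reinitialized and training resumes.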
Abstract
A well-trained convolutional neural network can easily be pruned without significant loss of performance. This is because of unnecessary overlap in the features captured by the network's filters. Innovations in n…