Nov 2015
Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks
Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall, Dhruv Batra
TL;DR
This work investigates how best to build ensembles of convolutional neural networks, comparing a range of strategies for sharing parameters and encouraging diversity among ensemble members. It proposes TreeNets, ensembles trained end-to-end under a single unified loss, which achieve higher accuracy than traditionally trained ensembles.
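A minimal sketch of the TreeNet idea described above: shared lower layers branch into M independent heads, and all heads are trained jointly under one loss. The layer sizes, head count, and the simple averaged per-head loss here are illustrative assumptions, not the paper's exact configuration (the paper also explores diversity-encouraging loss variants).

```python
import torch
import torch.nn as nn

class TreeNet(nn.Module):
    """TreeNet-style ensemble: a shared trunk branching into M heads."""

    def __init__(self, num_heads: int = 4, num_classes: int = 10):
        super().__init__()
        # Shared trunk: parameters reused by every ensemble member.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # M heads: each an independent classifier branch on top of the trunk.
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_classes),
            )
            for _ in range(num_heads)
        ])

    def forward(self, x):
        shared = self.trunk(x)
        return [head(shared) for head in self.heads]  # one logit tensor per head

def ensemble_loss(logits_per_head, targets, criterion=nn.CrossEntropyLoss()):
    # One unified loss over all heads; averaging per-head losses is the
    # simplest choice and stands in for the paper's loss variants.
    losses = torch.stack([criterion(logits, targets) for logits in logits_per_head])
    return losses.mean()

model = TreeNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = ensemble_loss(model(x), y)
loss.backward()  # gradients flow through every head and the shared trunk
```

Because the trunk is shared, the ensemble costs far less in parameters and computation than M fully independent networks, while end-to-end training lets the heads specialize jointly rather than being combined post hoc.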
Abstract
Convolutional neural networks have achieved state-of-the-art performance on a wide range of tasks. Most benchmarks are led by ensembles of these powerful learners, but ensembling is typically treated as a post-hoc …