BriefGPT.xyz
Jul, 2022
Bounding generalization error with input compression: An empirical study with infinite-width networks
Angus Galloway, Anna Golubeva, Mahmoud Salem, Mihai Nica, Yani Ioannou...
TL;DR
This paper explores estimating the generalization error of deep neural networks from the mutual information between the input and the final-layer representation, linking the two via an input compression bound. The bound performs well in many settings and can help eliminate trial-and-error model selection.
Abstract
Estimating the generalization error (GE) of deep neural networks (DNNs) is an important task that often relies on availability of held-out data. The ability to better predict GE based on a single training set may …