BriefGPT.xyz
Jul, 2017
On the Complexity of Learning Neural Networks
Le Song, Santosh Vempala, John Wilmes, Bo Xie
TL;DR
This paper works toward a theoretical explanation of neural networks, asking whether data generated by a network with a single hidden layer, smooth activation functions, and a benign input distribution can be learned efficiently. It proves that for a broad class of activation functions and any log-concave input distribution, there exists a family of single-hidden-layer functions, whose output is a sum gate, that cannot be learned efficiently to any nontrivial accuracy. This lower bound is robust to small perturbations of the network weights, and experiments confirm a phase transition in the training error.
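To make the function class concrete, here is a minimal sketch (not the paper's exact hard construction) of a single-hidden-layer network whose output is an unweighted sum gate over smooth sigmoid units, evaluated on Gaussian inputs; the Gaussian is one example of a log-concave input distribution, and the dimensions and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 20, 50
# Hypothetical hidden-layer weights; the paper's lower bound concerns
# a carefully chosen family of such weight settings.
W = rng.standard_normal((n_hidden, n_inputs))

def sigmoid(z):
    """A smooth activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def one_hidden_layer_sum(x, W):
    """Single hidden layer; the output is the plain sum of the hidden
    units' activations (a sum gate), with no trained output weights."""
    return sigmoid(W @ x).sum()

# A sample from a log-concave input distribution (standard Gaussian).
x = rng.standard_normal(n_inputs)
y = one_hidden_layer_sum(x, W)
print(y)
```

Since each sigmoid unit outputs a value in (0, 1), the sum-gate output always lies strictly between 0 and the number of hidden units.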
Abstract
The stunning empirical successes of
neural networks
currently lack rigorous
theoretical explanation
. What form would such an explanation take, in the face of existing complexity-theoretic