TL;DR: By introducing data sparsity into a generative hierarchical model, we demonstrate a strong correlation between learned abstract representations and invariance to spatial transformations, and explain how the sample complexity of convolutional neural networks on the Sparse Random Hierarchy Model (SRHM) depends on the task's sparsity and hierarchical structure.
Abstract
Understanding what makes high-dimensional data learnable is a fundamental question in machine learning. On the one hand, it is believed that the success of deep learning lies in its ability to build a