Jun, 2023
Nonparametric regression using over-parameterized shallow ReLU neural networks
Yunfei Yang, Ding-Xuan Zhou
TL;DR
For the task of learning functions from certain smooth function classes, over-parameterized neural networks can achieve minimax optimal rates of convergence (up to logarithmic factors), provided the weights are suitably constrained or regularized.
Abstract
It is shown that over-parameterized neural networks can achieve minimax optimal rates of convergence (up to logarithmic factors) for learning functions from certain smooth function classes, if the weights are suitably constrained or regularized.
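
As a rough illustration of the setting (not the paper's construction or proof), the sketch below fits an over-parameterized shallow ReLU network to noisy samples of a smooth one-dimensional target by least squares with weight-norm regularization (weight decay). The target function, width m, regularization strength, and optimizer settings are arbitrary choices made for illustration only.

```python
# Minimal sketch, assuming a synthetic 1-D regression task: an over-parameterized
# shallow ReLU network (width m >> sample size n) trained by regularized least squares.
import math
import torch

torch.manual_seed(0)

# Synthetic data: n noisy samples of a smooth target on [0, 1].
n = 200
x = torch.rand(n, 1)
y = torch.sin(2 * math.pi * x) + 0.1 * torch.randn(n, 1)

# Shallow ReLU network; m is chosen much larger than n (over-parameterization).
m = 5000
model = torch.nn.Sequential(
    torch.nn.Linear(1, m),
    torch.nn.ReLU(),
    torch.nn.Linear(m, 1),
)

# Weight decay plays the role of the weight constraint/regularization;
# the strength 1e-4 is an illustrative value, not one from the paper.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((model(x) - y) ** 2)  # empirical least-squares risk
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```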