May, 2023
Robust Implicit Regularization via Weight Normalization
Hung-Hsu Chou, Holger Rauhut, Rachel Ward
TL;DR
This paper studies the implicit bias of overparameterized models trained by gradient descent with weight normalization, and proves that weight normalization induces an implicit bias toward sparse solutions in diagonal linear models.
Abstract
Overparameterized models may have many interpolating solutions; implicit regularization refers to the hidden preference of a particular optimization method for a certain interpolating solution among the many.
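A minimal NumPy sketch of this kind of implicit bias toward sparse interpolators in a diagonal linear model. Note this uses the standard overparameterized form w = u⊙u − v⊙v with small initialization as an illustrative stand-in, not the paper's exact weight-normalized dynamics; the problem sizes, step sizes, and iteration counts are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 40                       # underdetermined: many interpolating solutions
X = rng.standard_normal((n, d)) / np.sqrt(n)
w_star = np.zeros(d)
w_star[:2] = [3.0, -2.0]            # sparse ground truth
y = X @ w_star

# Plain gradient descent on w: converges to the minimum-l2-norm interpolator.
w = np.zeros(d)
for _ in range(20000):
    w -= 0.1 * X.T @ (X @ w - y)

# GD on the overparameterized form w = u*u - v*v with small initialization:
# implicitly biased toward sparse (small-l1-norm) interpolators.
lr, alpha = 0.02, 1e-4
u = np.full(d, alpha)
v = np.full(d, alpha)
for _ in range(50000):
    g = X.T @ (X @ (u * u - v * v) - y)   # gradient w.r.t. the effective weights
    u, v = u - lr * 2 * u * g, v + lr * 2 * v * g
w_over = u * u - v * v

print("l1 norm, plain GD          :", round(np.linalg.norm(w, 1), 3))
print("l1 norm, overparameterized :", round(np.linalg.norm(w_over, 1), 3))
```

Both runs interpolate the data, but the overparameterized dynamics settle on a solution with markedly smaller l1 norm, illustrating how the choice of parametrization, not the loss, selects among the interpolating solutions.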