February 2022
Improving Generalization via Uncertainty Driven Perturbations
Matteo Pagliardini, Gilberto Manunza, Martin Jaggi, Michael I. Jordan, Tatjana Chavdarova
TL;DR
This paper discusses the simplicity bias of gradient-descent algorithms and proposes an uncertainty-driven perturbation method to reduce this bias. We find that the method improves the model's margins and generalization, and achieves a competitive balance between robustness and generalization across multiple datasets.
Abstract
Recently Shah et al., 2020 pointed out the pitfalls of the simplicity bias - the tendency of gradient-based algorithms to learn simple models - which include the model's high sensitivity to small input perturbations, as well as sub-optimal margins.
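The TL;DR describes perturbing training points toward regions where the model is uncertain. Below is a minimal sketch of one plausible reading of such uncertainty-driven perturbations, assuming PyTorch, an entropy-based uncertainty estimate, and iterative sign-gradient ascent under an L∞ bound; the function name and the hyperparameters `step_size`, `num_steps`, and `epsilon` are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def uncertainty_driven_perturbation(model, x, step_size=0.01, num_steps=5, epsilon=0.1):
    """Illustrative sketch: perturb inputs x by iteratively ascending the
    gradient of the model's predictive entropy, so the perturbed points
    sit where the model is most uncertain. Hyperparameters are assumed,
    not taken from the paper."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(num_steps):
        probs = F.softmax(model(x + delta), dim=-1)
        # Predictive entropy of the softmax output as the uncertainty estimate.
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        (grad,) = torch.autograd.grad(entropy, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # ascend the uncertainty
            delta.clamp_(-epsilon, epsilon)    # keep the perturbation small
    return (x + delta).detach()
```

Under this reading, training would compute the loss on `uncertainty_driven_perturbation(model, x)` alongside or instead of the clean batch `x`, analogously to adversarial training but with predictive uncertainty rather than the loss as the ascent objective.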