Mar, 2021
Understanding Generalization in Adversarial Training via the Bias-Variance Decomposition
Yaodong Yu, Zitong Yang, Edgar Dobriban, Jacob Steinhardt, Yi Ma
TL;DR
Using the bias-variance decomposition, this work studies how the adversarial-training perturbation radius affects test error: the model's bias increases monotonically with the perturbation radius, while the variance is unimodal, peaking near the interpolation threshold of the training set. The bias and variance also shed light on two methods for narrowing the generalization gap: pre-training and the use of unlabeled data.
Abstract
Adversarially trained models exhibit a large generalization gap: they can interpolate the training set even for large perturbation radii, but at the cost of large test error on clean samples. To investigate this …
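The decomposition referenced in the TL;DR can be estimated empirically by training several models on independent training draws and averaging their predictions on a fixed test set. Below is a minimal sketch under squared loss, using a synthetic ridge-regression stand-in; the paper's actual setup (adversarially trained neural networks, with the perturbation radius varied) is not reproduced here, and all function names are hypothetical.

# Minimal sketch: estimating the bias-variance decomposition of test risk
# under squared loss. Synthetic linear-regression stand-in for the paper's
# adversarially trained networks (illustrative only; names are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, d=20, noise=0.5):
    """Draw (X, y) from a fixed linear ground truth plus Gaussian noise."""
    w_true = np.ones(d) / np.sqrt(d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + noise * rng.normal(size=n)
    return X, y

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression; stands in for one trained model."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Fixed test set; bias and variance are averaged over its points.
X_test, y_test = make_data(2000)

# Train several models on independent training draws.
n_trials, n_train = 50, 100
preds = np.stack([X_test @ fit_ridge(*make_data(n_train)) for _ in range(n_trials)])

mean_pred = preds.mean(axis=0)
bias_sq  = np.mean((mean_pred - y_test) ** 2)   # squared bias (includes label noise)
variance = np.mean(preds.var(axis=0))           # variance across training draws
print(f"bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}, "
      f"risk ~ {np.mean((preds - y_test) ** 2):.3f}")

In this squared-loss setting the printed risk equals the sum of the two terms; the paper's finding is how each term moves as the adversarial perturbation radius grows, which this toy regression does not model.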