BriefGPT.xyz
May, 2019
Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M Oberman
TL;DR
This paper revisits gradient regularization as a means of promoting adversarial robustness: it uses local gradient information to derive novel theoretical robustness bounds, trains robust ImageNet models with a scalable form of input gradient regularization, and shows experimentally that input gradient regularization achieves robustness comparable to adversarial training.
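To make the training objective concrete, here is a minimal NumPy sketch of the core idea, not the authors' implementation: the usual loss is augmented with a penalty on the squared norm of the loss gradient with respect to the *input*. A logistic model is used purely for illustration because its input gradient has a closed form; the weights, data, and penalty strength `lam` are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_with_input_grad_penalty(w, x, y, lam=0.1):
    """Logistic loss plus an input-gradient regularization term.

    For f(x) = sigmoid(w . x) and label y in {-1, +1}, the per-example loss is
    L = -log sigmoid(y * w . x). Its gradient with respect to the input x is
    dL/dx = -(1 - sigmoid(y * w . x)) * y * w  (closed form, no autodiff needed).
    The regularized objective is L + lam * ||dL/dx||^2; minimizing it flattens
    the loss surface around the input, which is the robustness mechanism.
    """
    margin = y * np.dot(w, x)
    p = sigmoid(margin)
    base = -np.log(p)
    input_grad = -(1.0 - p) * y * w
    return base + lam * np.dot(input_grad, input_grad)

w = np.array([1.0, -2.0])   # illustrative weights
x = np.array([0.5, 0.3])    # illustrative input
print(loss_with_input_grad_penalty(w, x, y=1.0, lam=0.0))  # plain loss
print(loss_with_input_grad_penalty(w, x, y=1.0, lam=0.1))  # penalized loss
```

In deep networks the input gradient is obtained by backpropagation instead of a closed form, and the penalty's gradient requires differentiating through that gradient (double backpropagation); the paper's contribution is making this scheme scale to ImageNet-sized models.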
Abstract
Input gradient regularization is not thought to be an effective means for promoting adversarial robustness. In this work we revisit this regularization scheme with some new ingredients. First, we derive new per-i…