BriefGPT.xyz
Oct, 2020
Understanding Catastrophic Overfitting in Single-step Adversarial Training
Hoki Kim, Woojin Lee, Jaewook Lee
TL;DR
This paper studies how to prevent the "catastrophic overfitting" problem that arises in single-step adversarial training, and proposes a simple method that not only prevents this problem but also allows single-step adversarial training to defend against multi-step adversarial attacks.
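As a hedged illustration of the setting (not the paper's own method), "single-step adversarial training" typically means training on FGSM examples: perturb each input one step along the sign of the input gradient, scaled to the full budget, then update the model on those perturbed inputs. The sketch below shows this on a toy logistic-regression problem; all data, names, and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy binary-classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
eps = 0.1   # L-infinity perturbation budget (assumed)
lr = 0.1    # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Single-step (FGSM) attack: one gradient-sign step of size eps
    # with respect to the input x, for the logistic loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x)
    X_adv = X + eps * np.sign(grad_x)

    # Train on the adversarial examples (FGSM adversarial training).
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after FGSM training: {acc:.2f}")
```

The catastrophic overfitting the paper describes appears in the deep-network version of this loop: the model becomes robust to the single-step attack while losing robustness to multi-step (e.g. PGD) attacks.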
Abstract
Adversarial examples are perturbed inputs designed to deceive machine-learning classifiers by adding adversarial perturbations to the original data. Although fast adversarial training has demonstrated both robustness and efficiency, …