Nov, 2017
MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples
Nicholas Carlini, David Wagner
TL;DR: Neither MagNet nor the "Efficient Defenses" model fully withstands adversarial examples with only slight distortion.
Abstract
MagNet and "Efficient Defenses Against Adversarial Attacks" were recently proposed as defenses against
adversarial examples. We find that we can construct adversarial examples that defeat both defenses with only a slight increase in distortion.
MagNet: a Two-Pronged Defense against Adversarial Examples
MagNet is a defense for neural network classifiers against adversarial examples in deep learning. It learns to distinguish normal from adversarial examples by approximating the manifold of normal examples, and it reconstructs adversarial examples by moving them toward that manifold. It also proposes using diversity among defenses to strengthen MagNet against graybox attacks.
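The two-pronged pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the autoencoder here is a toy stand-in for a trained model, and the names `magnet_filter` and `toy_ae` are hypothetical.

```python
import numpy as np

def magnet_filter(x, autoencoder, threshold):
    """MagNet-style sketch: return (is_adversarial, reformed_input).

    Detector: reject inputs whose reconstruction error is large, i.e.
    inputs that lie far from the manifold of normal examples.
    Reformer: replace accepted inputs with their reconstruction,
    pulling them back toward the manifold before classification.
    """
    recon = autoencoder(x)
    error = np.mean((x - recon) ** 2)  # reconstruction error as manifold distance
    return error > threshold, recon

# Toy "autoencoder": shrinkage toward the data mean (0.5), standing in
# for a real trained autoencoder.
toy_ae = lambda x: 0.5 + 0.9 * (x - 0.5)

normal = np.full(10, 0.5)        # lies on the toy manifold
adversarial = normal + 0.4       # perturbed off the manifold

flag_normal, _ = magnet_filter(normal, toy_ae, threshold=1e-3)
flag_adv, reformed = magnet_filter(adversarial, toy_ae, threshold=1e-3)
print(flag_normal, flag_adv)     # the perturbed input is flagged
```

The threshold would in practice be calibrated on held-out normal data; the paper above shows that an attacker aware of the whole pipeline can still find adversarial examples that pass the detector.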
May, 2017
Advocating Multiple Defense Strategies Against Adversarial Examples
Through geometric analysis and numerical experiments, this paper examines why defense mechanisms are ineffective at protecting neural networks against $\ell_2$ adversarial examples, and evaluates the practical significance and potential problems of existing mixed defense strategies.
Dec, 2020
On the Robustness of the CVPR 2018 White-Box Adversarial Attack Defenses
This work evaluates two white-box defenses proposed at CVPR 2018 and finds that they are not effective: existing techniques can reduce the accuracy of the defended neural network models to 0%.
Apr, 2018
Efficient Defenses Against Adversarial Attacks
This paper proposes a new defense method based on practical observations that strengthens the structure of deep neural networks and improves their prediction stability, making them harder to attack with targeted perturbations. Experiments against a variety of attacks show the method is effective and outperforms other defenses, with almost negligible training overhead.
Jul, 2017