Dec 2018
Adversarial Example Decomposition
Horace He, Aaron Lou, Qingxuan Jiang, Isay Katsman, Pian Pawakapan, et al.
TL;DR
Research has shown that deep neural networks are vulnerable to carefully crafted adversarial perturbations, and that these perturbations often transfer between models. This work hypothesizes that adversarial vulnerability has three sources: the architecture, the dataset, and random initialization. It decomposes adversarial examples into architecture-dependent, data-dependent, and noise-dependent components, which can be recombined to improve transferability without reducing effectiveness against the original model.
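The decomposition idea can be illustrated with a small sketch. The following is a minimal, hypothetical implementation, not the paper's exact procedure: given an ensemble of models that share architecture and training data but differ only in random initialization, single-step FGSM perturbations are averaged to estimate the shared (architecture- and data-dependent) component, and each model's residual is taken as its noise-dependent component. The names `fgsm_perturbation` and `decompose`, and the choice of FGSM as the attack, are assumptions made for illustration.

```python
# A minimal sketch (not the paper's exact procedure) of splitting adversarial
# perturbations into a shared component and per-model noise components.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.03):
    # Single-step FGSM: perturb in the direction of the loss gradient's sign.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return eps * x.grad.sign()

def decompose(models, x, y, eps=0.03):
    # Hypothetical decomposition over models that differ only in random seed:
    # the mean perturbation estimates the architecture- and data-dependent
    # component; each residual estimates that model's noise-dependent part.
    perts = [fgsm_perturbation(m, x, y, eps) for m in models]
    shared = torch.stack(perts).mean(dim=0)
    noise = [p - shared for p in perts]
    return shared, noise
```

Under this sketch, "recombining" would mean attacking a held-out model with the shared component alone; per the TL;DR, such a component should transfer better than any single model's noise-dependent residual.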
Abstract
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer between models.