Jun, 2021
On Improving Adversarial Transferability of Vision Transformers
Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli
TL;DR
This work proposes two attack strategies, Self-Ensemble and Token Refinement, which exploit the self-attention and compositional nature of Vision Transformers to improve the transferability of adversarial attacks.
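The Self-Ensemble idea can be pictured as treating each transformer block as its own classifier and attacking all of them jointly. The sketch below is a minimal, hedged illustration of that idea in PyTorch, assuming the common timm ViT layout (`patch_embed`, `cls_token`, `pos_embed`, `blocks`, `norm`, `head`); it is not the authors' released implementation, and attribute names may differ across timm versions.

```python
# Hedged sketch: turn a ViT into an ensemble of block-wise classifiers and
# average their losses to compute attack gradients. Model internals follow
# the usual timm VisionTransformer layout (an assumption, not the paper's code).
import torch
import torch.nn.functional as F
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

def block_wise_logits(x):
    """Return class-token logits after every transformer block,
    reusing the shared final norm and classification head."""
    tokens = model.patch_embed(x)
    cls = model.cls_token.expand(x.shape[0], -1, -1)
    tokens = torch.cat((cls, tokens), dim=1)
    tokens = model.pos_drop(tokens + model.pos_embed)
    logits_per_block = []
    for blk in model.blocks:
        tokens = blk(tokens)
        logits_per_block.append(model.head(model.norm(tokens)[:, 0]))
    return logits_per_block

def self_ensemble_grad(x, y):
    """Gradient of the cross-entropy averaged over all block-wise classifiers."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.stack(
        [F.cross_entropy(logits, y) for logits in block_wise_logits(x)]
    ).mean()
    loss.backward()
    return x.grad

# Usage idea: x is a normalized (B, 3, 224, 224) batch, y its labels;
# any standard attack step can consume the gradient, e.g.
#   x_adv = x + eps * self_ensemble_grad(x, y).sign()
```

Averaging the loss over intermediate classifiers, rather than attacking only the final output, is what provides the ensemble effect exploited for transferability.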
Abstract
Vision transformers (ViTs) process input images as sequences of patches via self-attention; a radically different architecture than convolutional neural networks (CNNs). This makes it interesting to study the adv…
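To make the patch-sequence view in the abstract concrete, here is a small, self-contained sketch (patch size, embedding dimension, and head count are illustrative assumptions, not values from the paper) showing how an image becomes a token sequence that a self-attention layer mixes globally, whereas a CNN would apply local convolutions.

```python
# Hedged sketch: split an image into non-overlapping patches, embed each
# patch as a token, and mix the token sequence with one self-attention layer.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)          # (batch, channels, H, W)
patch_size, embed_dim = 16, 768              # illustrative choices

# (1, 3, 224, 224) -> (1, 196, 3*16*16): 196 patch tokens of raw pixels
patches = (
    image.unfold(2, patch_size, patch_size)
         .unfold(3, patch_size, patch_size)
         .permute(0, 2, 3, 1, 4, 5)
         .reshape(1, -1, 3 * patch_size * patch_size)
)

# Linear patch embedding, as in a ViT's first layer
tokens = nn.Linear(3 * patch_size * patch_size, embed_dim)(patches)

# Global token mixing via self-attention over the whole patch sequence
attn = nn.MultiheadAttention(embed_dim, num_heads=12, batch_first=True)
mixed, _ = attn(tokens, tokens, tokens)
print(mixed.shape)  # torch.Size([1, 196, 768])
```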