Deep neural networks (DNNs) are vulnerable to adversarial
examples: maliciously crafted inputs that cause DNNs to make incorrect
predictions. Recent work has shown that these attacks generalize to the
physical domain, creating perturbations on physical objects that fool image
classifiers under a variety of real-world conditions. Such attacks pose a risk
to deep learning models used in safety-critical cyber-physical systems.
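
As background, a minimal sketch of the canonical digital attack this kind of work builds on, the Fast Gradient Sign Method (Goodfellow et al., 2015), is shown below. It is not the physical attack described here; it assumes a PyTorch classifier `model`, a normalized input batch `x`, and integer labels `y` (all hypothetical names, not from this paper):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method:
    take one step of size eps in the direction of the sign of the loss
    gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss the attacker wants to increase
    loss.backward()                      # populates x.grad
    x_adv = x + eps * x.grad.sign()      # perturb each pixel by +/- eps
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Physical-domain attacks differ in that the perturbation must survive printing, viewing angle, distance, and lighting changes rather than being applied directly to pixel values.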