Adversarial examples, generated by adding small, intentionally crafted perturbations that are imperceptible to humans, can mislead deep neural networks (DNNs) into making incorrect predictions. Although much work has been done on both adversarial attacks and defenses, a fine-grained understanding
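To make the generation process concrete, the following is a minimal PyTorch sketch of one standard attack, the fast gradient sign method (FGSM) of Goodfellow et al.; the abstract does not commit to any particular attack, and the model, inputs, and perturbation budget epsilon here are illustrative placeholders:

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Generate adversarial examples with one signed-gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # producing a small perturbation bounded in the L-infinity norm.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small budget such as epsilon = 8/255, the perturbed image is typically indistinguishable from the original to a human observer yet can flip the model's prediction, which is the behavior the abstract describes.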