Recent adversarial attack research has revealed the vulnerability of
learning-based classifiers to carefully crafted perturbations. However,
most existing attack methods generalize poorly across datasets
because they rely on a classification layer with a closed set