Since the discovery of adversarial examples (the ability to fool modern CNN
classifiers with tiny perturbations of the input), there has been much
discussion of whether they are a "bug" specific to current neural
architectures and training methods or an inevitable "feature" of hig