Adversarial examples are a pervasive phenomenon of machine learning models
where seemingly imperceptible perturbations to the input lead to
misclassifications for otherwise statistically accurate models. We propose a
geometric framework, drawing on tools from the manifold reconstruction literature.