Data poisoning attacks, in which an adversary corrupts a training set with
the goal of inducing specific targeted mistakes, have raised substantial
concern: even the possibility of such an attack can make a user no longer
trust the results of a learning system. In this work, we show