Deep neural networks have been demonstrated to be vulnerable to backdoor
attacks. Specifically, by injecting a small number of maliciously constructed
inputs into the training set, an adversary is able to plant a backdoor in the
trained model. This backdoor can then be activated during inference.
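The poisoning step described above can be sketched as follows. This is a minimal illustrative example, not a method from the text: the `poison` helper, the bright-corner trigger, and the target label 7 are all hypothetical choices made here for concreteness.

```python
import numpy as np

def poison(image, label, target_label, trigger_value=1.0, patch=3):
    """Hypothetical helper: stamp a small trigger patch in the
    bottom-right corner and relabel to the attacker's target class."""
    poisoned = image.copy()
    poisoned[-patch:, -patch:] = trigger_value  # the backdoor trigger
    return poisoned, target_label

# Toy training set of 100 grayscale 28x28 images with 10 classes.
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)

# Inject a small number of maliciously constructed inputs: here,
# 5% of the training set is stamped with the trigger and relabeled.
n_poison = 5
for i in range(n_poison):
    images[i], labels[i] = poison(images[i], labels[i], target_label=7)
```

A model trained on this set can learn to associate the trigger pattern with the target class, so that at inference time any input carrying the trigger is misclassified as class 7 while clean inputs behave normally.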