Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks,
in which an adversary implants a secret trigger into an otherwise normally
functioning model. Detection of backdoors in trained models without access to
the training data or example triggers is an important open problem.
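To make the threat model concrete, the sketch below shows one common form of backdoor poisoning: a BadNets-style square patch stamped onto a small fraction of training images, each relabeled to the attacker's target class. All names and parameters are illustrative assumptions, not details from this work.

```python
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    # Stamp a small square patch into the bottom-right corner
    # (a BadNets-style trigger; value and size are assumptions).
    image = image.copy()
    image[-patch_size:, -patch_size:, ...] = patch_value
    return image

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    # Add the trigger to a random fraction of the training set and
    # relabel those examples to the attacker's chosen target class.
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is why detection without access to the poisoned training data or the trigger itself is difficult.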