Deep learning models are vulnerable to various adversarial manipulations of
their training data, parameters, and input samples. In particular, an adversary
can modify the training data and model parameters to embed backdoors into the
model, so that the model behaves according to the adversary's intent.
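The data-poisoning route to embedding a backdoor can be sketched as follows. This is a minimal illustrative example, not any specific attack from the literature: the function `poison_dataset`, the trigger pattern (setting the last three features to a fixed value), and the poisoning fraction are all hypothetical choices made for the sketch. A model trained on the poisoned set would tend to associate the trigger with the adversary's chosen target class.

```python
import numpy as np

def poison_dataset(X, y, target_label, trigger_value=1.0,
                   poison_frac=0.1, seed=0):
    """Return poisoned copies of (X, y) plus the poisoned indices.

    A hypothetical trigger is stamped onto a random fraction of the
    samples (last 3 features set to trigger_value) and their labels
    are flipped to the adversary's target class.
    """
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(len(X) * poison_frac)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx, -3:] = trigger_value   # stamp the trigger pattern
    yp[idx] = target_label         # relabel to the target class
    return Xp, yp, idx

# Toy data: 100 samples, 10 features, all labeled class 0.
X = np.zeros((100, 10))
y = np.zeros(100, dtype=int)
Xp, yp, idx = poison_dataset(X, y, target_label=1)
print(len(idx))  # 10 samples carry the trigger and the flipped label
```

At test time, the adversary adds the same trigger to an input to steer the backdoored model toward `target_label`, while clean inputs behave normally.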