Machine learning models are powerful but fallible. Generating adversarial
examples - inputs deliberately crafted to cause model misclassification or
other errors - can yield important insight into model assumptions and
vulnerabilities. Despite significant recent work on adversarial exa