Data poisoning has been proposed as a compelling defense against facial
recognition models trained on Web-scraped pictures. Users can perturb images
they post online, so that models will misclassify future (unperturbed)
pictures. We demonstrate that this strategy provides a false sense of security.
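
To make the threat model concrete, the following is a minimal sketch, assuming PyTorch and torchvision, of the kind of perturbation a user might apply to a photo before posting it. It is not the method of any deployed tool: the resnet18 stand-in model, the target class, and the epsilon budget are illustrative, and a single FGSM step is used purely for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in for a face recognition model; real schemes perturb
# face embeddings, not ImageNet logits.
model = models.resnet18(weights=None)
model.eval()

def perturb(image: torch.Tensor, target_class: int,
            eps: float = 8 / 255) -> torch.Tensor:
    """One FGSM step nudging the image toward an incorrect class."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step down the loss gradient for the wrong class; clamp to valid pixels.
    poisoned = (image - eps * image.grad.sign()).clamp(0, 1)
    return poisoned.detach()

photo = torch.rand(1, 3, 224, 224)  # placeholder for a user's photo
poisoned_photo = perturb(photo, target_class=123)
```

Deployed schemes optimize the perturbation over many iterations in a feature space; the essential constraint illustrated here is only that the user commits the perturbation once, at posting time, and has no control over what happens afterward.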