In this work, we develop a technique to produce counterfactual visual
explanations. Given a 'query' image $I$ for which a vision system predicts
class $c$, a counterfactual visual explanation identifies how $I$ could change
such that the system would output a different specified class $c'$.
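As a toy illustration of this setup only (not the paper's method), the sketch below finds such a counterfactual input for a small linear classifier by gradient descent on the input until the prediction flips to the target class. The model, names, and optimization scheme are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a vision system: a linear classifier over flattened
# "images". Everything here is an illustrative assumption, not the
# paper's actual counterfactual-explanation procedure.
rng = np.random.default_rng(0)
n_features, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_features))
b = np.zeros(n_classes)

def predict(x):
    """Class with the highest logit under the toy linear model."""
    return int(np.argmax(W @ x + b))

def counterfactual(x, target, lr=0.02, steps=1000):
    """Gradient-descend on the input until the model outputs `target`."""
    x = x.copy()
    onehot = np.eye(n_classes)[target]
    for _ in range(steps):
        if predict(x) == target:
            break
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # Gradient of cross-entropy w.r.t. the input for a linear model.
        x -= lr * (W.T @ (p - onehot))
    return x

I = rng.normal(size=n_features)   # query "image" (flattened)
c = predict(I)                    # class the system currently predicts
c_prime = (c + 1) % n_classes     # a different specified class
I_cf = counterfactual(I, c_prime) # changed input classified as c_prime
```

Here `I_cf` plays the role of the changed query: an input close to $I$ for which the classifier outputs the specified class $c'$ instead of $c$.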