Model attribution is a popular tool for explaining the rationale behind model
predictions. However, recent work suggests that attributions are vulnerable
to minute perturbations, which can be added to input samples to fool the
attributions while leaving the prediction outputs unchanged. Alt