With the continued development of convolutional neural networks (CNNs), there
is growing concern regarding the representations they encode internally.
Analyzing these internal representations is referred to as model
interpretation. While the task of model explanation, justifying the