Providing interpretable explanations for neural network reasoning remains a persistent challenge in explainable AI (XAI). Existing methods fall short in different ways: Integrated Gradients produces noisy attribution maps, and LIME, while intuitive, fits local surrogate models that may deviate from the model's
reasoning. We introduce a framework that uses hierarch