BriefGPT.xyz
Jan, 2022
Discovering Invariant Rationales for Graph Neural Networks
Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, Tat-Seng Chua
TL;DR
Proposes a new strategy for discovering invariant causal rationales in order to build intrinsically interpretable graph neural networks; experiments on both synthetic and real-world datasets show that the strategy outperforms existing baselines in interpretability and generalization ability on graph classification.
Abstract
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features -- the rationale -- which guides the model prediction. Unfortunately, the leading rationalization models …
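To make the rationale idea concrete, here is a minimal, hedged sketch (not the paper's actual method): given per-edge importance scores, it keeps the top-k edges of a graph as the rationale and returns the rest as the complement. The scoring function is assumed to come from elsewhere (e.g. a learned mask); the edge list, scores, and function name below are illustrative only.

```python
def extract_rationale(edges, scores, k):
    """Split a graph's edges into a rationale (top-k by score) and its complement.

    edges  -- list of (u, v) tuples
    scores -- per-edge importance scores, same length as edges
    k      -- number of edges to keep in the rationale
    """
    # Sort edge indices by descending importance score.
    order = sorted(range(len(edges)), key=lambda i: scores[i], reverse=True)
    rationale = [edges[i] for i in order[:k]]
    complement = [edges[i] for i in order[k:]]
    return rationale, complement


# Toy 4-cycle graph with hypothetical importance scores.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
scores = [0.9, 0.1, 0.8, 0.2]
rationale, complement = extract_rationale(edges, scores, 2)
# rationale holds the two highest-scoring edges: (0, 1) and (2, 3)
```

In the paper's setting such a split would feed two branches of the model, but that training procedure is beyond this sketch.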