Guangtao Wang, Rex Ying, Jing Huang, Jure Leskovec
TL;DR: This paper proposes MAGNA, a multi-hop graph attention network that diffuses the attention scores from prior layers to enlarge the receptive field of every layer, letting diffused attention values capture network context even between nodes that are not directly connected. Experiments show that MAGNA achieves state-of-the-art results on both node classification and knowledge graph completion benchmarks.
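The diffusion idea in the TL;DR can be sketched numerically: starting from a row-stochastic one-hop attention matrix, powers of that matrix are combined with geometrically decaying weights (a personalized-PageRank-style series), so attention mass reaches multi-hop neighbors. The function below is an illustrative sketch, not the authors' implementation; the names `diffuse_attention`, `alpha`, and `K` are assumptions for this example.

```python
import numpy as np

def diffuse_attention(A: np.ndarray, alpha: float = 0.15, K: int = 6) -> np.ndarray:
    """Illustrative sketch: approximate sum_{i=0}^{K} theta_i * A^i,
    with theta_i = alpha * (1 - alpha)**i (geometric decay).
    A is a row-stochastic one-hop attention matrix."""
    n = A.shape[0]
    A_power = np.eye(n)          # A^0
    A_diff = np.zeros_like(A)
    for i in range(K + 1):
        A_diff += alpha * (1.0 - alpha) ** i * A_power
        A_power = A_power @ A    # advance to the next power of A
    return A_diff

# Toy attention on a 3-node path graph (each node attends to itself and neighbors).
A = np.array([[0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 0.5, 0.5]])
A_diff = diffuse_attention(A)
# Nodes 0 and 2 share no edge, yet diffusion gives them nonzero mutual attention.
print(A_diff[0, 2] > 0)
```

This is how a layer can aggregate multi-hop context without stacking extra message-passing layers: the diffused matrix replaces the one-hop attention matrix in the aggregation step.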
Abstract
The self-attention mechanism in graph neural networks (GNNs) has led to
state-of-the-art performance on many graph representation learning tasks.
Currently, at every layer, attention is computed between connected pairs of