Jan, 2024
Graph Transformers without Positional Encodings
Ayush Garg
TL;DR
Eigenformer, built on a novel Laplacian-spectrum-aware attention mechanism, achieves performance comparable to state-of-the-art MP-GNN architectures and Graph Transformers on a number of standard graph neural network benchmarks, and even surpasses the state of the art on some datasets. We also find that our architecture trains considerably faster, possibly due to its inherent graph inductive bias.
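The TL;DR only names the mechanism, so the sketch below is a rough illustration of what a Laplacian-spectrum-aware attention head could look like, not the paper's actual formulation: attention scores receive a pairwise bias built from a learned filter over the eigenvectors of the normalized graph Laplacian, in place of positional encodings. The function and parameter names (`spectrum_aware_attention`, `spectral_weights`) are hypothetical.

```python
import numpy as np

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def spectrum_aware_attention(x, adj, w_q, w_k, w_v, spectral_weights):
    """One attention head whose scores carry a Laplacian-spectrum bias
    instead of positional encodings (illustrative sketch only)."""
    _, eigvecs = np.linalg.eigh(normalized_laplacian(adj))
    # Pairwise bias B[i, j] = sum_k g_k * phi_k(i) * phi_k(j), where the
    # hypothetical learnable `spectral_weights` play the role of g_k.
    bias = (eigvecs * spectral_weights) @ eigvecs.T
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1]) + bias
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

# Tiny usage example on a 4-node path graph.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 8))                        # node features
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = spectrum_aware_attention(x, adj, w_q, w_k, w_v,
                               spectral_weights=rng.normal(size=4))
print(out.shape)  # (4, 8)
```

Because the bias is computed from the Laplacian eigenvectors of the input graph itself, the attention is structure-aware without any explicit node positional encodings, which is the general idea the TL;DR attributes to Eigenformer.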
Abstract
Recently, transformers for graph representation learning have become increasingly popular, achieving state-of-the-art performance on a wide variety of datasets, either alone or in combination with …