Mar, 2023
Are More Layers Beneficial to Graph Transformers?
Haiteng Zhao, Shuming Ma, Dongdong Zhang, Zhi-Hong Deng, Furu Wei
TL;DR
This paper studies the depth problem of graph transformers and finds that current graph transformers are constrained by a bottleneck in global attention as depth increases, which limits their ability to focus on key substructures and to learn expressive features. The authors propose a new graph transformer model named DeepGraph that combines substructure tokens with local attention, strengthening the global attention's focus on substructures and improving expressiveness. The approach alleviates the depth bottleneck of self-attention and achieves state-of-the-art results on various graph benchmarks.
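To make the mechanism described above concrete, the following is a minimal PyTorch sketch of one plausible realization: substructure tokens are appended to the node tokens, and an attention mask restricts each substructure token to attend only to itself and its member nodes, while node tokens keep global attention. This is an illustration under these assumptions, not the authors' released DeepGraph implementation; the layer name, the `membership` tensor, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class SubstructureLocalAttentionLayer(nn.Module):
    """Illustrative sketch (not the official DeepGraph code): a transformer
    encoder layer whose input is [node tokens ; substructure tokens], where
    each substructure token may only attend to the nodes it covers."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, node_x, sub_x, membership):
        # node_x: (B, N, d) node tokens; sub_x: (B, S, d) substructure tokens
        # membership: (B, S, N) bool, True if node n belongs to substructure s
        B, N, _ = node_x.shape
        S = sub_x.shape[1]
        x = torch.cat([node_x, sub_x], dim=1)  # (B, N+S, d)

        # Attention mask (True = blocked): node tokens attend globally; each
        # substructure token is restricted to itself and its member nodes.
        mask = torch.zeros(B, N + S, N + S, dtype=torch.bool, device=x.device)
        mask[:, N:, :N] = ~membership
        mask[:, N:, N:] = ~torch.eye(S, dtype=torch.bool, device=x.device)

        # nn.MultiheadAttention expects a per-head mask of shape (B*heads, L, L)
        attn_mask = mask.repeat_interleave(self.attn.num_heads, dim=0)

        h = self.norm1(x)
        h, _ = self.attn(h, h, h, attn_mask=attn_mask)
        x = x + h
        x = x + self.ffn(self.norm2(x))
        return x[:, :N], x[:, N:]  # updated node / substructure tokens

# Example usage with random data: 10 nodes, 3 sampled substructures.
layer = SubstructureLocalAttentionLayer(dim=32)
nodes = torch.randn(1, 10, 32)
subs = torch.randn(1, 3, 32)
member = torch.zeros(1, 3, 10, dtype=torch.bool)
member[0, 0, :4] = True  # substructure 0 covers nodes 0..3, etc.
member[0, 1, 3:7] = True
member[0, 2, 6:] = True
out_nodes, out_subs = layer(nodes, subs, member)
```

How substructures are sampled and encoded into `sub_x` is left open here; the point of the sketch is only how local attention on substructure tokens can coexist with global attention over nodes within a single layer.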
Abstract
Despite that going deep has proven successful in many neural architectures, the existing graph transformers are relatively shallow. In this work, we explore whether more layers are beneficial to graph transformers …