Oct, 2021
On Provable Benefits of Depth in Training Graph Convolutional Networks
Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
TL;DR
This paper studies why graph convolutional networks (GCNs) degrade in performance as the number of layers increases, and finds that properly trained deeper models achieve very high training accuracy but generalize poorly. By analyzing the generalization capability of GCNs, the paper proposes a decoupled architecture that preserves the expressive power of GCNs while guaranteeing better generalization. Empirical evaluations on a variety of synthetic and real-world datasets confirm the theory.
Abstract
Graph convolutional networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing. Despite the apparent consensus, we observe th…