BriefGPT.xyz
January 2018
Inference, Learning and Attention Mechanisms that Exploit and Preserve Sparsity in Convolutional Networks
Timo Hackel, Mikhail Usvyatsov, Silvano Galliani, Jan D. Wegner, Konrad Schindler
TL;DR
This paper introduces a suite of tools for processing sparse data with CNNs, including direct sparse convolutions, an attention mechanism that avoids densification, and modified backpropagation algorithms that fit into standard learning frameworks, achieving a lower memory footprint and lower computation time than conventional dense frameworks.
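The "direct sparse convolution" mentioned above can be illustrated with a minimal sketch: instead of sliding a filter over every pixel of a dense map, iterate only over the non-zero input sites and scatter their weighted contributions into the output. This is our own assumption-laden illustration of the general idea (function name, data layout, and 'same' padding are ours), not the authors' implementation:

```python
import numpy as np

def sparse_conv2d(indices, values, shape, kernel):
    """Direct sparse 2-D cross-correlation (CNN-style convolution).

    indices: (N, 2) integer array of non-zero (row, col) input sites
    values:  (N,) activations at those sites
    shape:   (H, W) of the underlying dense feature map
    kernel:  (kh, kw) filter
    Returns a dense (H, W) output with 'same' padding.
    """
    H, W = shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    # Work is proportional to (number of non-zeros) x (kernel size),
    # not to the full H x W map -- the point of a direct sparse scheme.
    for (r, c), v in zip(indices, values):
        for i in range(kh):
            for j in range(kw):
                # Scatter: input site (r, c) contributes to output
                # position (r - i + kh//2, c - j + kw//2).
                rr = r - i + kh // 2
                cc = c - j + kw // 2
                if 0 <= rr < H and 0 <= cc < W:
                    out[rr, cc] += v * kernel[i, j]
    return out
```

On very sparse inputs this loop touches far fewer positions than a dense convolution would, which is where the memory and runtime savings come from; a practical implementation would also keep the *output* sparse rather than allocating a dense array.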
Abstract
While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit …