BriefGPT.xyz
Jan, 2024
SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection
Foozhan Ataiefard, Walid Ahmed, Habib Hajimolahoseini, Saina Asani, Farnoosh Javadi...
TL;DR
Our work proposes a method to reduce the number of unnecessary interactions between unimportant tokens in vision transformer models by separating them out and routing them through a different, low-cost computational path. This yields a gain of more than 13% in training throughput on the Huawei Ascend910A while maintaining the same level of classification accuracy as the baseline model.
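The idea described above can be sketched in a few lines: tokens judged unimportant are routed around the expensive attention computation via an identity skip path, while the rest take the full path. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the importance scores, `keep_ratio`, and the `expensive_block` stand-in are all hypothetical.

```python
import numpy as np

def skip_tokens(tokens, importance, keep_ratio=0.5):
    """Split tokens into an expensive full path and a cheap skip path.

    tokens:     (N, D) array of token embeddings
    importance: (N,) per-token scores (e.g. derived from attention)
    Hypothetical illustration of token-level skipping, not SkipViT's exact method.
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    order = np.argsort(importance)[::-1]   # most important first
    keep_idx = np.sort(order[:n_keep])     # tokens sent through the full block
    skip_idx = np.sort(order[n_keep:])     # tokens routed around the block

    # Expensive path: stand-in for a full attention + MLP block.
    processed = expensive_block(tokens[keep_idx])

    # Cheap path: identity skip connection, near-zero compute.
    skipped = tokens[skip_idx]

    # Recombine in the original token order.
    out = np.empty_like(tokens)
    out[keep_idx] = processed
    out[skip_idx] = skipped
    return out

def expensive_block(x):
    # Placeholder for the transformer block; a fixed linear map for illustration.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((x.shape[1], x.shape[1])) / np.sqrt(x.shape[1])
    return x @ w
```

Because the skipped tokens bypass the quadratic-cost attention entirely, the per-layer compute drops roughly with the square of the kept-token fraction, which is where the throughput gain comes from.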
Abstract
Vision transformers are known to be more computationally and data-intensive than CNN models. Transformer models such as ViT require all the input image tokens to learn the relationships among them. However,