Jun, 2023
FasterViT: Fast Vision Transformers with Hierarchical Attention
Ali Hatamizadeh, Greg Heinrich, Hongxu Yin, Andrew Tao, Jose M. Alvarez...
TL;DR
This paper introduces FasterViT, a new hybrid model that combines convolutional neural networks with vision transformers. It uses the Hierarchical Attention (HAT) approach to decompose global self-attention hierarchically, reducing its computational cost and increasing image-processing throughput and efficiency. FasterViT is validated extensively across a range of computer vision tasks and delivers faster and more accurate performance than competing models.
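
As a rough illustration of the hierarchical attention idea summarized above (a sketch, not the authors' reference implementation), the PyTorch snippet below combines local window self-attention with per-window "carrier" tokens that attend to one another globally. The class name, the mean-pooled carrier initialization, and the tensor layout are simplifying assumptions made for brevity.

```python
import torch
import torch.nn as nn


class HierarchicalAttentionSketch(nn.Module):
    """Local window attention plus a global exchange among per-window carrier tokens."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Attention among carrier tokens only: cheap global information exchange.
        self.carrier_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Attention over the tokens inside each window, plus that window's carrier token.
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_windows, window_size, dim), tokens already partitioned into windows.
        b, nw, ws, d = x.shape
        # One carrier token per window; here it is simply the mean of the window's tokens.
        carriers = x.mean(dim=2)                               # (b, nw, d)
        carriers, _ = self.carrier_attn(carriers, carriers, carriers)

        # Prepend each window's globally informed carrier token and attend locally.
        tokens = torch.cat([carriers.unsqueeze(2), x], dim=2)  # (b, nw, 1 + ws, d)
        tokens = tokens.reshape(b * nw, 1 + ws, d)
        tokens, _ = self.local_attn(tokens, tokens, tokens)
        return tokens[:, 1:, :].reshape(b, nw, ws, d)          # drop carrier tokens


# Example: an 8x8 feature map split into four 4x4 windows (16 tokens each), dim 64.
x = torch.randn(2, 4, 16, 64)
out = HierarchicalAttentionSketch(dim=64, num_heads=4)(x)
print(out.shape)  # torch.Size([2, 4, 16, 64])
```

Attention cost then scales with the window size and the number of carrier tokens rather than with the full token count, which is the source of the throughput gain claimed for HAT.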
Abstract
We design a new family of hybrid CNN-ViT neural networks, named FasterViT, with a focus on high image throughput for computer vision (CV) applications.