Apr, 2022
DaViT: Dual Attention Vision Transformers
Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang...
TL;DR
This paper proposes Dual Attention Vision Transformers (DaViT), a network that captures global context through self-attention mechanisms while maintaining computational efficiency, and achieves state-of-the-art performance on image classification tasks.
Abstract
In this work, we introduce dual attention vision transformers (DaViT), a simple yet effective vision transformer architecture that is able to capture global context while maintaining computational efficiency. We […]
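As a rough illustration of how attending over channels instead of spatial positions can keep global context while reducing cost, the sketch below contrasts plain spatial self-attention (an N×N attention map over N tokens) with channel self-attention (a C×C map, linear in the number of tokens). This is a simplified, hypothetical toy, not the paper's actual layer: DaViT's real blocks use windowed spatial attention, channel groups, multi-head projections, and normalization, all omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    # x: (N, C) tokens. Attention over the N spatial tokens:
    # the (N, N) map mixes positions globally, but costs O(N^2 * C).
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))       # (N, N)
    return attn @ x                                     # (N, C)

def channel_attention(x):
    # Transposed view: each of the C channels is a "channel token"
    # of length N. The (C, C) map still mixes information computed
    # over the whole image, at O(C^2 * N) cost, linear in N.
    xt = x.T                                            # (C, N)
    attn = softmax(xt @ xt.T / np.sqrt(xt.shape[1]))    # (C, C)
    return (attn @ xt).T                                # back to (N, C)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 16))   # 64 spatial tokens, 16 channels
print(spatial_attention(x).shape)   # (64, 16)
print(channel_attention(x).shape)   # (64, 16)
```

Both paths return tokens of the same shape, so the two attention types can be stacked or alternated; the point of the channel path is that its attention map no longer grows with image resolution.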