Apr, 2023
Joint Token Pruning and Squeezing Towards More Aggressive Compression of Vision Transformers
Siyuan Wei, Tianzhu Ye, Shen Zhang, Yao Tang, Jiajun Liang
TL;DR
A novel Token Pruning & Squeezing (TPS) module compresses vision transformers more efficiently, improving the model's computational speed and image classification accuracy.
Abstract
Although vision transformers (ViTs) have shown promising results in various computer vision tasks recently, their high computational cost limits their practical applications. Previous approaches that prune redundant tokens …
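
The TL;DR and abstract only name the idea at a high level. As an illustration (not the paper's exact TPS algorithm), the sketch below shows one way joint token pruning and squeezing can work on ViT tokens: low-scoring tokens are pruned, but each one is fused into its most similar retained token instead of being discarded outright. The function name, the source of the importance scores, and the simple averaging fusion are all assumptions made for this sketch.

import torch
import torch.nn.functional as F

def token_prune_and_squeeze(tokens, scores, keep_ratio=0.5):
    """Illustrative sketch: prune low-scoring tokens, then squeeze
    each pruned token into its most similar retained token.

    tokens: (B, N, C) patch tokens
    scores: (B, N) importance scores (e.g. class-attention values; assumed here)
    """
    B, N, C = tokens.shape
    n_keep = max(1, int(N * keep_ratio))

    # Split tokens into a reserved set (top scores) and a pruned set.
    order = scores.argsort(dim=1, descending=True)
    keep_idx, prune_idx = order[:, :n_keep], order[:, n_keep:]
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    pruned = torch.gather(tokens, 1, prune_idx.unsqueeze(-1).expand(-1, -1, C))

    # Match each pruned token to its nearest kept token by cosine similarity.
    sim = torch.einsum('bpc,bkc->bpk',
                       F.normalize(pruned, dim=-1),
                       F.normalize(kept, dim=-1))
    nearest = sim.argmax(dim=-1)  # (B, N - n_keep), index into kept tokens

    # Squeeze: average each pruned token into its matched kept token.
    fused = kept.clone()
    counts = torch.ones(B, n_keep, 1, dtype=tokens.dtype, device=tokens.device)
    fused.scatter_add_(1, nearest.unsqueeze(-1).expand(-1, -1, C), pruned)
    counts.scatter_add_(1, nearest.unsqueeze(-1),
                        torch.ones_like(nearest, dtype=tokens.dtype).unsqueeze(-1))
    return fused / counts

# Usage with stand-in data:
x = torch.randn(2, 196, 384)   # e.g. ViT-S patch tokens
s = torch.rand(2, 196)         # stand-in importance scores
y = token_prune_and_squeeze(x, s)  # -> (2, 98, 384)

With keep_ratio=0.5, a 196-token layer passes only 98 tokens downstream while still carrying a compressed trace of the pruned patches, which is the intuition behind pruning plus squeezing rather than pruning alone.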