July 2024
ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization
for Vision Transformers
TL;DR: Proposes a framework called ADFQ-ViT that improves quantization of the post-LayerNorm and post-GELU activations in Vision Transformers by introducing a Per-Patch Outlier-aware Quantizer, a Shift-Log2 Quantizer, and Attention-score enhanced Module-wise Optimization, yielding clear performance gains at 4-bit on image classification, object detection, and instance segmentation tasks.
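To make the "Shift-Log2" idea concrete, below is a minimal sketch of the general shift-then-log2 quantization pattern: shift activations so they are positive, then snap them to a grid of powers of two, which suits the skewed distribution of post-GELU activations. This is an illustrative assumption, not the paper's exact ADFQ-ViT quantizer; the function name, clipping scheme, and `n_bits` handling are all hypothetical.

```python
import numpy as np

def shift_log2_quantize(x, n_bits=4, shift=None):
    """Illustrative shift + log2 quantization (NOT the exact ADFQ-ViT
    algorithm): shift values to be non-negative, round their log2 to
    the nearest integer exponent, clip exponents to an n_bits range,
    then undo the shift on the dequantized result."""
    if shift is None:
        shift = -x.min()  # move the minimum to zero (hypothetical choice)
    y = x + shift
    eps = 1e-8  # avoid log2(0)
    exp = np.round(np.log2(np.maximum(y, eps)))
    # keep only 2**n_bits distinct power-of-two levels below the max
    exp = np.clip(exp, exp.max() - (2 ** n_bits - 1), exp.max())
    q = np.power(2.0, exp)
    return q - shift  # dequantized output in the original range

x = np.array([-0.17, 0.02, 0.5, 1.9, 3.1])  # GELU-like activations
print(shift_log2_quantize(x, n_bits=4))
```

Snapping to powers of two lets the dequantization multiply be replaced by bit shifts in integer hardware, which is the usual motivation for log2-style quantizers.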