BriefGPT.xyz
May, 2024
Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference
Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang...
TL;DR
Sparse-Tuning is a new tuning paradigm that sparsely preserves informative tokens and merges redundant ones, enhancing attention to foreground objects while reducing computation on background regions. It enables efficient fine-tuning and inference of pre-trained ViT models, meeting the GPU-memory and time-efficiency requirements that existing methods fail to satisfy.
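The token preservation-and-merging idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the use of an externally supplied importance score (e.g., CLS attention), and the score-weighted merging of discarded tokens are all assumptions for illustration.

```python
import numpy as np

def sparse_preserve_and_merge(tokens, scores, keep_ratio=0.5):
    """Illustrative sketch (hypothetical, not Sparse-Tuning's actual method):
    keep the highest-scoring tokens and merge the rest into a single token.

    tokens: (N, D) array of token embeddings.
    scores: (N,) importance scores, e.g., attention weights from the CLS token.
    Returns a (k + 1, D) array: k preserved tokens plus one merged token.
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    order = np.argsort(scores)[::-1]            # most informative first
    keep_idx, merge_idx = order[:k], order[k:]
    kept = tokens[keep_idx]
    if merge_idx.size == 0:
        return kept
    # Collapse redundant (background) tokens via score-weighted averaging,
    # so their information is compressed rather than discarded outright.
    w = scores[merge_idx]
    w = w / (w.sum() + 1e-8)
    merged = (w[:, None] * tokens[merge_idx]).sum(axis=0, keepdims=True)
    return np.concatenate([kept, merged], axis=0)
```

Shrinking the token sequence this way reduces the quadratic attention cost in later ViT layers, which is the source of the inference-time savings the TL;DR describes.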
Abstract
Parameter-efficient fine-tuning (PEFT) has emerged as a popular approach for adapting pre-trained vision transformer (ViT) models to downstream applications. While current PEFT methods achieve parameter efficiency…