Sep, 2024
Multiple-Exit Tuning: Towards Inference-Efficient Adaptation for Vision Transformer
Zheng Liu, Jinchao Zhu, Nannan Li, Gao Huang
TL;DR
This work addresses the problem that existing parameter-efficient transfer learning methods allocate excessive computation to easy samples at inference time. The proposed Multiple-Exit Tuning (MET) method integrates multiple exits into the Vision Transformer so that easy samples can leave the network at early exits, thereby improving inference efficiency. Experiments show that MET outperforms state-of-the-art methods in both accuracy and inference efficiency.
Abstract
Parameter-efficient Transfer Learning (PETL) has shown great potential in adapting a Vision Transformer (ViT) pre-trained on large-scale datasets to various downstream tasks. Existing studies primarily focus on m…
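
The listing does not describe the exit design itself, so the following is a minimal PyTorch sketch of the early-exit mechanism summarized in the TL;DR: lightweight classifier heads (the `ExitHead` modules below are hypothetical) are attached after transformer blocks, and inference stops as soon as a head's softmax confidence clears a threshold, letting easy samples skip the remaining blocks. The block structure, pooling, and confidence threshold are illustrative assumptions, not the authors' implementation, and only the inference path is sketched; how MET tunes the exits and backbone is not covered here.

```python
import torch
import torch.nn as nn


class ExitHead(nn.Module):
    """Hypothetical lightweight classifier attached after a transformer block."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, tokens):
        # Mean-pool the token sequence; the paper may instead use the CLS token.
        return self.fc(self.norm(tokens.mean(dim=1)))


class MultiExitViT(nn.Module):
    """Sketch of a ViT-style encoder with an exit head after every block."""

    def __init__(self, dim=192, depth=12, num_heads=3, num_classes=100):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
             for _ in range(depth)]
        )
        self.exits = nn.ModuleList([ExitHead(dim, num_classes) for _ in range(depth)])

    @torch.no_grad()
    def forward(self, tokens, threshold=0.9):
        # tokens: (1, seq_len, dim) patch embeddings of a single image.
        for block, exit_head in zip(self.blocks, self.exits):
            tokens = block(tokens)
            logits = exit_head(tokens)
            conf, pred = logits.softmax(dim=-1).max(dim=-1)
            if conf.item() >= threshold:  # confident enough: exit early
                return pred, logits
        return pred, logits  # hard sample: ran through the full depth


model = MultiExitViT().eval()
patch_tokens = torch.randn(1, 197, 192)  # e.g. 14x14 patches plus a CLS token
prediction, _ = model(patch_tokens)
```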