June 2023
Approximated Prompt Tuning for Vision-Language Pre-trained Models
Qiong Wu, Shubin Huang, Yiyi Zhou, Pingyang Dai, Annan Shu...
TL;DR
This work proposes a method called "Approximated Prompt Tuning" to improve the transfer-learning efficiency of vision-language pre-trained models. By approximating the effect of the soft prompt tokens with an independent information-diffusion step, it avoids expensive global attention modeling and significantly reduces computational complexity.
Abstract
Prompt tuning is a parameter-efficient way to deploy large-scale pre-trained models to downstream tasks by adding task-specific tokens. In terms of vision-language pre-trained (VLP) models, prompt tuning often re…
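To make the idea in the TL;DR concrete, below is a minimal PyTorch sketch of an attention layer in which the soft prompts contribute through a separate, cheaper aggregation step rather than full attention over the concatenated (input + prompt) sequence. The class name, parameter names, and the exact normalization are illustrative assumptions, not the paper's released implementation.

import torch
import torch.nn as nn


class ApproximatedPromptAttention(nn.Module):
    """Sketch: soft prompts contribute via a low-cost, independent
    aggregation step instead of global attention over (n + p) tokens.
    All names here are illustrative, not the authors' code."""

    def __init__(self, dim: int, num_prompts: int = 10):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        # Learnable soft prompt tokens: the task-specific parameters that
        # are tuned while the backbone projections stay frozen.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim) input token features from the frozen backbone.
        q, k, v = self.q(x), self.k(x), self.v(x)

        # Standard self-attention over the n input tokens: O(n^2 * d).
        attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        out = attn @ v

        # Approximated prompt contribution: each input token independently
        # gathers information from the p prompts in O(n * p * d), so the
        # full (n + p)^2 global attention is never formed.
        pk = self.k(self.prompts)                                       # (p, dim)
        pv = self.v(self.prompts)                                       # (p, dim)
        prompt_attn = torch.softmax((q @ pk.t()) * self.scale, dim=-1)  # (batch, n, p)
        return out + prompt_attn @ pv

In such a setup, only the prompt tokens would receive gradients during transfer, while the query/key/value projections remain the frozen pre-trained weights; this is what keeps the approach parameter-efficient as well as computation-efficient.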