Mar, 2022
Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen...
TL;DR
This paper proposes input-tuning, which fine-tunes both continuous prompts and the input representations so that a frozen pretrained model can adapt more effectively to unfamiliar inputs in natural language generation tasks. Experiments show that it significantly and consistently outperforms prompt-tuning.
Abstract
Recently the prompt-tuning paradigm has attracted significant attention. By only tuning continuous prompts with a frozen pre-trained language model (PLM), …
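
The sketch below illustrates the idea described in the TL;DR and abstract: keep the PLM frozen, train a small set of continuous prompt embeddings (as in prompt-tuning), and additionally train a lightweight adapter that re-maps the input embeddings so unfamiliar inputs better match what the frozen model expects. This is a minimal illustration assuming a HuggingFace-style seq2seq PLM; the class, parameter names, and adapter design are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InputTuningWrapper(nn.Module):
    """Illustrative input-tuning setup: frozen PLM + trainable soft prompt
    + trainable input adapter (names and sizes are hypothetical)."""

    def __init__(self, plm, num_prompt_tokens=20, adapter_dim=64):
        super().__init__()
        self.plm = plm
        for p in self.plm.parameters():  # keep the pretrained LM frozen
            p.requires_grad = False

        hidden = plm.get_input_embeddings().embedding_dim
        # Continuous prompt, as in prompt-tuning.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, hidden) * 0.02)
        # Bottleneck adapter that adjusts unfamiliar input representations.
        self.adapter = nn.Sequential(
            nn.Linear(hidden, adapter_dim),
            nn.ReLU(),
            nn.Linear(adapter_dim, hidden),
        )

    def forward(self, input_ids, attention_mask, labels=None):
        embeds = self.plm.get_input_embeddings()(input_ids)
        embeds = embeds + self.adapter(embeds)  # residual input adaptation
        batch = embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prompt, embeds], dim=1)
        prompt_mask = attention_mask.new_ones(batch, prompt.size(1))
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        # Only the prompt and adapter receive gradients; the PLM stays fixed.
        return self.plm(inputs_embeds=embeds,
                        attention_mask=attention_mask,
                        labels=labels)
```

In this reading, prompt-tuning corresponds to training only `self.prompt`, while input-tuning additionally trains the input-side adapter, which is what the TL;DR credits for the gains on unfamiliar NLG inputs.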