Oct, 2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang...
TL;DR
With proper optimization, P-Tuning v2 matches the performance of fine-tuning across a wide range of model scales and natural language understanding tasks while tuning only 0.1%-3% of the parameters.
Abstract
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work and our results reveal that existing methods of prompt tuning do not perform well for normal-sized pretrained models or for hard sequence labeling tasks, indicating a lack of universality.
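
To make the parameter-efficiency claim concrete: in prompt tuning, the only trainable state is a small matrix of continuous prompt embeddings prepended to the input, while every backbone weight stays frozen. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the `PromptTuningWrapper` class, the `prompt_len` value, and the assumption that the backbone consumes embeddings directly are all illustrative.

```python
import torch
import torch.nn as nn

class PromptTuningWrapper(nn.Module):
    """Sketch of prompt tuning: only the continuous prompt is trained;
    every parameter of the backbone language model stays frozen."""

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze the language model
        # Continuous prompt: the only trainable parameters,
        # typically a tiny fraction of the backbone's size.
        self.prompt_embeds = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt to every sequence in the batch.
        batch_size = input_embeds.size(0)
        prompts = self.prompt_embeds.unsqueeze(0).expand(batch_size, -1, -1)
        return self.backbone(torch.cat([prompts, input_embeds], dim=1))

# Usage sketch with a toy frozen backbone that operates on embeddings.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptTuningWrapper(backbone, embed_dim=64, prompt_len=8)
trainable = [p for p in model.parameters() if p.requires_grad]  # only prompt_embeds
out = model(torch.randn(2, 10, 64))  # -> shape (2, 18, 64)
```

For reference, P-Tuning v2 itself goes beyond this input-layer recipe by applying continuous prompts at every layer of the model, which is what allows it to match fine-tuning on normal-sized models and harder tasks.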