BriefGPT.xyz
May, 2022
Prompt Tuning for Discriminative Pre-trained Language Models
Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie...
TL;DR
This paper proposes DPT, a prompt tuning framework for discriminative PLMs, which reformulates NLP tasks as discriminative language modeling problems. Comprehensive experiments on text classification and question answering show that, compared with vanilla fine-tuning, DPT achieves significantly higher performance in both full-data and low-resource settings, and also resolves the instability problem of tuning large PLMs.
Abstract
Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) …
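The core idea behind prompt tuning, which the abstract refers to, is to keep the pre-trained model frozen and train only a small set of continuous prompt embeddings prepended to the input. A minimal sketch of that mechanism, using assumed toy shapes rather than the paper's actual DPT setup:

```python
import numpy as np

# Toy dimensions (assumptions for illustration, not from the paper).
rng = np.random.default_rng(0)
vocab_size, hidden = 100, 16
n_prompt, seq_len = 4, 8

# Frozen PLM embedding table: never updated during prompt tuning.
frozen_embedding = rng.normal(size=(vocab_size, hidden))

# Trainable soft prompt: the only parameters that receive gradients.
prompt_embeds = rng.normal(size=(n_prompt, hidden))

# Embed an input sequence with the frozen table.
token_ids = rng.integers(0, vocab_size, size=seq_len)
token_embeds = frozen_embedding[token_ids]  # shape (seq_len, hidden)

# Prepend the soft prompt before feeding the sequence to the PLM encoder.
model_input = np.concatenate([prompt_embeds, token_embeds], axis=0)
print(model_input.shape)  # (12, 16)
```

In a discriminative setting such as DPT's, the frozen encoder would then score candidate spans or labels over this prompt-augmented input; only `prompt_embeds` (here a hypothetical name) is optimized.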