Apr, 2022
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning
Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu...
TL;DR
CP-Tuning is the first end-to-end contrastive prompt tuning framework that fine-tunes PLMs without manually engineered task-specific prompts and verbalizers; it integrates a task-invariant continuous prompt encoding technique with fully trainable prompt parameters.
Abstract
Pre-trained Language Models (PLMs) have achieved remarkable performance for various language understanding tasks in IR systems, which require the fine-tuning process based on labeled training data. For low-resource…
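The TL;DR above stays high-level, so here is a minimal sketch of the core mechanism it describes: fully trainable continuous prompt embeddings prepended to the input, plus a contrastive loss over [MASK]-position representations so same-class examples attract and different-class examples repel. This is an illustration, not the authors' released code; the encoder interface, prompt length, temperature, and the specific supervised contrastive loss form are assumptions standing in for the paper's exact objective.

```python
# Sketch of contrastive prompt tuning (assumptions noted inline; not CP-Tuning's official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastivePromptTuner(nn.Module):
    def __init__(self, encoder, hidden_size, prompt_len=16):
        super().__init__()
        # `encoder` is a stand-in module mapping embeddings to hidden states;
        # a real PLM (e.g. via inputs_embeds) would replace it.
        self.encoder = encoder
        # Fully trainable, task-invariant continuous prompt vectors.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)

    def forward(self, input_embeds, mask_pos):
        # Prepend the soft prompt to every sequence in the batch.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.encoder(torch.cat([prompt, input_embeds], dim=1))
        # Gather the representation at the [MASK] position, shifted by the prompt length.
        idx = mask_pos + self.prompt.size(0)
        return hidden[torch.arange(batch), idx]          # (batch, hidden)

def pairwise_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: same-label pairs act as positives."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                # (batch, batch) similarities
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)                           # exclude self-pairs
    logits = sim - 1e9 * torch.eye(len(z), device=z.device)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_row = pos_mask.sum(1).clamp(min=1)
    return -(pos_mask * log_prob).sum(1).div(pos_per_row).mean()
```

In a full setup the [MASK] representations would come from the PLM itself and this loss would serve as the verbalizer-free training signal; everything here is reduced to the bare mechanism for readability.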