BriefGPT.xyz
May, 2023
Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
Zhengxiang Shi, Aldo Lipani
TL;DR
This study revisits the common assumption that continued pre-training of language models on task-related text improves performance after task-specific fine-tuning, and proposes prompt-based continued pre-training (PCP). Experiments show that PCP outperforms conventional approaches across 21 benchmarks.
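As a rough illustration of the prompt-based fine-tuning that PCP builds on (a generic sketch, not the authors' released code; the model name, prompt template, and verbalizer words below are assumptions), a classification task can be cast as masked-language-model prediction, so the same objective used during continued pre-training carries over to fine-tuning:

# Sketch only: prompt-based classification via the masked-LM head.
# Assumptions: model name, prompt template, and verbalizer are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Verbalizer: each class label is tied to a single vocabulary token.
verbalizer = {"positive": " great", "negative": " terrible"}
label_token_ids = [tokenizer.encode(w, add_special_tokens=False)[0] for w in verbalizer.values()]

def label_scores(sentence: str) -> torch.Tensor:
    # Wrap the input in a prompt template and read logits at the mask position.
    text = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, mask_pos, label_token_ids]

print(label_scores("A thoroughly enjoyable film."))  # larger first value -> "positive"

Fine-tuning would then minimize a cross-entropy loss over these per-label scores, keeping the model in the same masked-LM output space throughout.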
Abstract
Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we re-visit the widely accepted notion in NLP that continued pre-training of LMs on task-related texts improves the performance of fine-tuning on downstream tasks. […]