May, 2022
Learning a Better Initialization for Soft Prompts via Meta-Learning
Yukun Huang, Kun Qian, Zhou Yu
TL;DR
This paper proposes MetaPT, a meta-learning-based pre-training method for initializing soft prompts, and shows better and more stable performance than existing methods on seven downstream tasks.
Abstract
Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream tasks. Without a good initialization, …
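To make the idea concrete, below is a minimal, self-contained sketch of meta-learning a soft-prompt initialization over a set of auxiliary tasks. It uses a Reptile-style first-order outer update rather than necessarily the paper's exact algorithm, and the toy frozen backbone, the synthetic auxiliary tasks, and all hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Sketch: meta-learning a soft-prompt initialization (Reptile-style outer loop).
# The frozen backbone and synthetic auxiliary tasks are stand-ins for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

EMB_DIM, PROMPT_LEN, NUM_CLASSES = 32, 4, 2

# Frozen backbone stand-in: scores the concatenation of soft prompt and input.
backbone = torch.nn.Linear(EMB_DIM * (PROMPT_LEN + 1), NUM_CLASSES)
for p in backbone.parameters():
    p.requires_grad_(False)

def forward(prompt, x):
    """Prepend the soft prompt to each input and score with the frozen backbone."""
    batch = x.shape[0]
    prompt_flat = prompt.reshape(1, -1).expand(batch, -1)
    return backbone(torch.cat([prompt_flat, x], dim=-1))

def sample_auxiliary_task(num_examples=64):
    """Synthetic stand-in for one auxiliary task: inputs and binary labels."""
    x = torch.randn(num_examples, EMB_DIM)
    w = torch.randn(EMB_DIM)
    y = (x @ w > 0).long()
    return x, y

auxiliary_tasks = [sample_auxiliary_task() for _ in range(8)]

# The soft-prompt initialization being meta-learned.
meta_prompt = torch.zeros(PROMPT_LEN, EMB_DIM)

INNER_STEPS, INNER_LR, META_LR, META_EPOCHS = 5, 0.1, 0.5, 20

for epoch in range(META_EPOCHS):
    for x, y in auxiliary_tasks:
        # Inner loop: adapt a copy of the prompt to this auxiliary task;
        # only the prompt is trained, the backbone stays frozen.
        prompt = meta_prompt.clone().requires_grad_(True)
        opt = torch.optim.SGD([prompt], lr=INNER_LR)
        for _ in range(INNER_STEPS):
            loss = F.cross_entropy(forward(prompt, x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Outer update: move the initialization toward the adapted prompt.
        meta_prompt += META_LR * (prompt.detach() - meta_prompt)

print("meta-learned prompt norm:", meta_prompt.norm().item())
```

The resulting `meta_prompt` would then serve as the initialization for ordinary prompt tuning on a downstream task; a MAML-style inner/outer split could be substituted in the same place. The key property shown here is that only the prompt parameters are ever updated while the backbone remains frozen.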