Sep, 2022
What Makes Pre-trained Language Models Better Zero/Few-shot Learners?
Jinghui Lu, Rui Zhao, Brian Mac Namee, Dongsheng Zhu, Weidong Han...
TL;DR
This paper proposes a theoretical framework to explain the efficacy of prompt learning in zero/few-shot scenarios. It further hypothesizes that language discrepancy can measure the quality of a prompt, and introduces a perplexity-based, annotation-agnostic template selection method that makes it possible to predict prompt performance in advance.
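To make the template selection idea concrete, here is a minimal sketch (not the authors' code) of annotation-agnostic selection: each candidate template is scored by the perplexity a pre-trained LM assigns to the filled-in prompt over unlabeled inputs, and the lowest-perplexity template is chosen. The GPT-2 model is a stand-in, and the templates and input sentence are hypothetical.

```python
# Sketch of perplexity-based, annotation-agnostic template selection.
# Assumes Hugging Face transformers; GPT-2 is an illustrative choice.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the LM (exp of mean per-token NLL)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

# Hypothetical candidate templates for a sentiment task; {x} is filled
# with *unlabeled* inputs, so no annotations are needed to score them.
templates = [
    "Review: {x} Sentiment:",
    "{x} All in all, it was",
    "Question: is the following review positive or negative? {x}",
]
unlabeled_inputs = ["The film was a delight from start to finish."]

# Rank templates by mean perplexity; the hypothesis is that lower
# perplexity forecasts better zero/few-shot prompt performance.
scores = {
    t: sum(perplexity(t.format(x=x)) for x in unlabeled_inputs)
       / len(unlabeled_inputs)
    for t in templates
}
best = min(scores, key=scores.get)
print(scores, "->", best)
```

Because the score uses only unlabeled text, templates can be ranked before any labeled evaluation, which is what allows performance to be predicted in advance.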
Abstract
In this paper, we propose a theoretical framework to explain the efficacy of prompt learning in zero/few-shot scenarios. First, we prove that conventional …