Feb, 2020
Aligning the Pretraining and Finetuning Objectives of Language Models
Nuo Wang Pierse, Jingwen Lu
TL;DR
This paper studies explicitly aligning the pretraining objective with the finetuning objective in language model training. The authors find that this objective alignment significantly improves finetuning task performance and reduces the minimum number of finetuning examples required, making the resulting models leaner and more efficient. They call this setting Few Example learning, which can benefit real-time applications and reduce the cost of human labeling.
Abstract
We demonstrate that explicitly aligning the pretraining objectives to the finetuning objectives in language model training significantly improves the finetuning task performance and reduces the minimum amount of finetuning examples required.
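To make the idea of objective alignment concrete, here is a minimal, hypothetical sketch in Python: the finetuning task is reformatted as the same cloze-style masked-token prediction used in pretraining, so both stages share one output head and loss. The function names, prompt wording, and label-to-token mapping are illustrative assumptions, not the paper's exact recipe.

```python
# A minimal, hypothetical sketch of objective alignment (illustrative only):
# rather than attaching a new classification head for finetuning, the
# downstream task is reformatted as the same cloze-style masked-token
# prediction used in pretraining, so both stages share one objective.

MASK = "[MASK]"

def pretraining_example(tokens, mask_pos):
    """Masked-LM pretraining: hide one token and ask the model to predict it."""
    target = tokens[mask_pos]
    masked = list(tokens)
    masked[mask_pos] = MASK
    return masked, target

def aligned_finetuning_example(tokens, label_token):
    """Aligned finetuning: express the task decision as the same
    fill-in-the-mask prediction, reusing the pretrained output layer.
    Mapping labels (e.g. "acronym" / "other") onto ordinary vocabulary
    tokens is an assumption made for this illustration."""
    prompt = list(tokens) + ["the", "highlighted", "span", "is", MASK]
    return prompt, label_token

if __name__ == "__main__":
    sent = "ACL stands for Association for Computational Linguistics".split()
    print(pretraining_example(sent, 0))
    print(aligned_finetuning_example(sent, "acronym"))
```

Because the finetuning loss under this framing is the same masked-token prediction as in pretraining, no task-specific parameters need to be learned from scratch, which is consistent with the paper's claim that aligned objectives reduce the number of finetuning examples required.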