Mar, 2021
Gradual Fine-Tuning for Low-Resource Domain Adaptation
Haoran Xu, Seth Ebner, Mahsa Yarmohammadi, Aaron Steven White, Benjamin Van Durme...
TL;DR
Through multi-stage, gradual fine-tuning, and without modifying the model or the learning objective, the adaptation of NLP models to a target domain can be substantially improved.
Abstract
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such …
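The idea lends itself to a simple data-scheduling loop. Below is a minimal sketch (not the authors' released code) of one way to realize gradual fine-tuning: each stage keeps all target-domain data but a shrinking sample of out-of-domain data, so the training distribution drifts toward the target domain across stages. The linear decay schedule, the `gradual_fine_tune` and `fine_tune` names, and the toy demo are all illustrative assumptions.

```python
import random

def gradual_fine_tune(model, in_domain, out_of_domain, fine_tune, num_stages=4):
    """Fine-tune `model` over several stages, keeping all in-domain data but
    sampling a shrinking share of out-of-domain data at each stage (here a
    simple linear decay, ending with in-domain data only)."""
    for stage in range(num_stages):
        # Fraction of out-of-domain examples kept at this stage: 1.0 -> 0.0.
        keep = 1.0 - stage / (num_stages - 1) if num_stages > 1 else 0.0
        sampled = random.sample(out_of_domain, int(len(out_of_domain) * keep))
        stage_data = in_domain + sampled
        random.shuffle(stage_data)
        model = fine_tune(model, stage_data)  # one ordinary fine-tuning pass
    return model

if __name__ == "__main__":
    # Toy demo: a no-op "model" and a stub fine_tune that just reports sizes.
    def fine_tune(model, data):
        print(f"fine-tuning on {len(data)} examples")
        return model

    gradual_fine_tune(model=None,
                      in_domain=list(range(100)),       # target-domain examples
                      out_of_domain=list(range(1000)),  # out-of-domain examples
                      fine_tune=fine_tune)
```

Any standard fine-tuning routine can be plugged in for `fine_tune`, since the approach changes only the data each stage sees, not the model or the objective.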