Nov 2022
Using Selective Masking as a Bridge between Pre-training and Fine-tuning
Tanish Lad, Himanshu Maheshwari, Shreyas Kottukkal, Radhika Mamidi
TL;DR
This work proposes a task-specific masking approach that adapts a pre-trained BERT model to downstream tasks. The results show that this selective masking strategy outperforms random masking.
Abstract
Pre-training a language model and then fine-tuning it for downstream tasks has demonstrated state-of-the-art results for various NLP tasks.
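The summary above contrasts task-specific (selective) masking with standard random masking for masked language modeling. As a rough illustration of the idea, the sketch below chooses mask positions by a task-relevance score instead of uniformly at random; `task_relevance_scores` and `task_vocab` are hypothetical stand-ins, since this page does not state the paper's actual token-scoring method.

```python
# Minimal sketch of selective masking for BERT-style MLM.
# Assumption: task relevance is approximated by membership in a
# task-specific vocabulary; the paper's scoring method may differ.
import random
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
MASK_ID = tokenizer.mask_token_id

def task_relevance_scores(token_ids, task_vocab):
    # Hypothetical heuristic: tokens found in a task-specific vocabulary
    # are treated as more informative for the downstream task.
    return [1.0 if t in task_vocab else 0.0 for t in token_ids]

def selective_mask(token_ids, task_vocab, mask_ratio=0.15):
    n_mask = max(1, int(len(token_ids) * mask_ratio))
    scores = task_relevance_scores(token_ids, task_vocab)
    # Rank positions by relevance (ties broken randomly) and mask the top
    # ones; random masking would instead sample positions uniformly.
    order = sorted(range(len(token_ids)),
                   key=lambda i: (-scores[i], random.random()))
    positions = set(order[:n_mask])
    # -100 is the standard ignore index for Hugging Face MLM loss labels.
    labels = [t if i in positions else -100 for i, t in enumerate(token_ids)]
    masked = [MASK_ID if i in positions else t for i, t in enumerate(token_ids)]
    return masked, labels
```

Replacing the ranked selection with uniform sampling recovers the random-masking baseline that the TL;DR reports selective masking outperforms.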