Oct, 2022
On Task-Adaptive Pretraining for Dialogue Response Selection
Tzu-Hsiang Lin, Ta-Chung Chi, Anna Rumshisky
TL;DR
This work sets out to verify the assumptions about initialization choices made in prior papers and to understand where DRS improvements come from. The study shows that initializing with RoBERTa performs similarly to BERT, that MLM+NSP can outperform all previously proposed TAP tasks, and that the NSP task is crucial for DRS; unlike in common NLU tasks, the TAP step is the main source of DRS improvements.
Abstract
Recent advancements in dialogue response selection (DRS) are based on the task-adaptive pre-training (TAP) approach: first initializing the model with BERT (Devlin et al., 2019), then adapting it to …
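
Since the abstract is cut off, a minimal sketch of the TAP step it and the TL;DR describe may help. This assumes the HuggingFace Transformers API (BertForPreTraining, which jointly computes the MLM and NSP losses); the (context, response) pair below is a hypothetical stand-in for a real DRS corpus, not from the paper.

```python
# Sketch of one TAP update with the MLM+NSP objective, assuming
# HuggingFace Transformers; the dialogue pair is a toy example.
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# One (context, response) pair; next_sentence_label=0 marks a true
# continuation, 1 a randomly sampled (negative) response.
enc = tokenizer("how are you ?", "i am fine , thanks .", return_tensors="pt")

# Mask ~15% of non-special tokens for the MLM objective; unmasked
# positions get label -100 so the loss ignores them.
labels = enc["input_ids"].clone()
mask = torch.rand(labels.shape) < 0.15
mask &= labels != tokenizer.cls_token_id
mask &= labels != tokenizer.sep_token_id
labels[~mask] = -100
enc["input_ids"][mask] = tokenizer.mask_token_id

out = model(**enc, labels=labels, next_sentence_label=torch.tensor([0]))
out.loss.backward()  # combined MLM + NSP loss for one adaptation step
```

In a full TAP run this step would loop over the target dialogue corpus with an optimizer before fine-tuning on response selection; swapping the initialization (e.g., RoBERTa, which lacks NSP) is one of the comparisons the TL;DR refers to.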