Nov, 2020
Are Pre-trained Language Models Knowledgeable to Ground Open Domain Dialogues?
Yufan Zhao, Wei Wu, Can Xu
TL;DR
Using pre-trained language models, we study knowledge-grounded dialogue generation and find that, when fine-tuned with only a small number of knowledge-grounded dialogues, pre-trained language models can surpass state-of-the-art models that require external knowledge, performing better on both automatic evaluation and human judgment.
Abstract
We study knowledge-grounded dialogue generation with pre-trained language models. Instead of pursuing new state-of-the-art on benchmarks, we try to understand if the knowledge stored in parameters of the pre-trained models …
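As a rough illustration of the setup described above (fine-tuning a pre-trained language model on a small set of knowledge-grounded dialogues), here is a minimal sketch using GPT-2 from the HuggingFace Transformers library. The data, input format, separator strings, and hyperparameters are hypothetical assumptions for the example, not the authors' actual code or protocol.

```python
# Minimal sketch (not the paper's code): fine-tune GPT-2 on knowledge-grounded dialogues.
# Each example concatenates a knowledge snippet, the dialogue context, and the response,
# and the model is trained with the standard causal language-modeling loss.
import torch
from torch.optim import AdamW
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()

# Hypothetical low-resource training set: a few (knowledge, context, response) triples.
examples = [
    {
        "knowledge": "The Eiffel Tower is 330 metres tall and located in Paris.",
        "context": "Have you ever been to Paris?",
        "response": "Yes! I even went up the Eiffel Tower; it is about 330 metres tall.",
    },
]

optimizer = AdamW(model.parameters(), lr=5e-5)

for epoch in range(3):
    for ex in examples:
        # Concatenate knowledge, context, and response into one sequence;
        # the "knowledge:/context:/response:" markers are illustrative only.
        text = (
            "knowledge: " + ex["knowledge"]
            + " context: " + ex["context"]
            + " response: " + ex["response"]
            + tokenizer.eos_token
        )
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        # Causal LM objective: the model learns to continue the knowledge and
        # context with the grounded response.
        outputs = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The point of the sketch is the contrast the paper draws: rather than bolting an external knowledge-retrieval module onto the generator, the pre-trained model itself is fine-tuned on a small amount of grounded dialogue data, relying on the knowledge already stored in its parameters.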