Sep, 2020
Multi-Referenced Training for Dialogue Response Generation
Tianyu Zhao, Tatsuya Kawahara
TL;DR
This work studies how to construct multi-reference training data and how to use an LGM model with an expressive prior to improve a dialogue model's ability to capture one-to-many relations.
Abstract
In open-domain dialogue response generation, a dialogue context can be continued with diverse responses, and dialogue models should capture such one-to-many relations. In this work, we first analyze the train