Apr, 2022
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?
Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy
TL;DR
This paper studies the hallucination problem in knowledge-grounded conversational models. Through extensive human studies, it finds that more than 60% of responses in standard benchmark datasets are hallucinated, which in turn leads models to hallucinate. The work raises important questions about the quality of training data and models, and releases its annotations publicly to support future research.
Abstract
Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination. In this work, we investigate the underlying causes of this phenomenon …