June 2024
Banishing LLM Hallucinations Requires Rethinking Generalization
Johnny Li, Saksham Consul, Eda Zhou, James Wong, Naila Farooqui, et al.
TL;DR
Through extensive, systematic experiments, the authors show that conventional explanations fail to account for why LLMs hallucinate in practice. They demonstrate that LLMs augmented with a massive mixture of memory experts can easily memorize large datasets, and they use this finding to design Lamini-1, a model aimed at banishing hallucinations.
Abstract
Despite their powerful chat, coding, and reasoning abilities, large language models (LLMs) frequently hallucinate. Conventional wisdom suggests that hallucinations are a consequence of a balance between creativity and factuality, which can be mitigated, but not eliminated, by grounding the LLM in external knowledge sources.