BriefGPT.xyz
February 2025
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
DongGeon Lee, Hwanjo Yu
TL;DR
This paper addresses hallucinations in large language model outputs, which severely undermine reliability in knowledge-intensive tasks. The proposed REFIND framework leverages retrieved documents to detect hallucinations in model outputs and introduces a new metric, the Context Sensitivity Ratio (CSR). Across multiple languages, REFIND outperforms existing detection methods, improving the trustworthiness of LLM outputs.
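To make the token-level idea behind a context-sensitivity score concrete, the sketch below compares each answer token's probability with and without retrieved evidence in the prompt. It is only an illustrative approximation: the gpt2 placeholder model, the prompt templates, the exact ratio definition, and the flagging rule are assumptions made for this sketch, not the CSR specification from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def token_logprobs(prompt: str, answer: str) -> torch.Tensor:
    """Log-probability of each answer token, conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1; keep only the answer region.
    log_probs = torch.log_softmax(logits[0, prompt_ids.size(1) - 1 : -1], dim=-1)
    return log_probs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)


def context_sensitivity(question: str, evidence: str, answer: str) -> torch.Tensor:
    """Per-token ratio: p(token | question + evidence) / p(token | question alone)."""
    with_ctx = token_logprobs(
        f"Evidence: {evidence}\nQuestion: {question}\nAnswer:", answer
    )
    without_ctx = token_logprobs(f"Question: {question}\nAnswer:", answer)
    return torch.exp(with_ctx - without_ctx)


if __name__ == "__main__":
    ratios = context_sensitivity(
        question="Who wrote The Old Man and the Sea?",
        evidence="The Old Man and the Sea is a novella by Ernest Hemingway.",
        answer=" Ernest Hemingway",
    )
    # Illustrative decision rule (an assumption, not the paper's): tokens whose
    # probability shifts sharply once the evidence is shown are candidate
    # hallucinated spans.
    print([f"{r:.3f}" for r in ratios.tolist()])

In practice such a score would be computed over spans of the model's own answer and compared against a tuned threshold; the retrieval step and the precise thresholding are left out here.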
Abstract
Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-Augmented Factuality Hallucination Detection), a framework that detects hallucinations in LLM outputs by leveraging retrieved documents.