BriefGPT.xyz
Mar, 2024
What if...?: Counterfactual Inception to Mitigate Hallucination Effects in Large Multimodal Models
Junho Kim, Yeon Ju Kim, Yong Man Ro
TL;DR
Using counterfactual keywords and a dual-modality verification process, the paper proposes a method that enhances the reliability of large multimodal models when they produce incorrect or unrelated responses, mitigating hallucination and improving model trustworthiness.
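The TL;DR mentions counterfactual keywords and a verification pass. As a rough illustration only (the function names, prompt wording, and agreement check below are assumptions, not the paper's actual method), the idea of injecting "what if" counterfactual cues and cross-checking the two responses can be sketched like this:

```python
# Hedged sketch of counterfactual-cue prompting. This is NOT the paper's
# implementation: build_counterfactual_prompt and verify are hypothetical
# helpers, and the agreement check is a toy stand-in for the paper's
# dual-modality verification process.

def build_counterfactual_prompt(question: str, keywords: list[str]) -> str:
    """Prepend 'What if ...?' counterfactual cues to the original question,
    prompting the model to reconsider before answering."""
    cues = " ".join(f"What if {kw}?" for kw in keywords)
    return f"{cues} Now answer carefully: {question}"

def verify(answer: str, counterfactual_answer: str) -> bool:
    """Toy consistency check: accept an answer only when the plain pass
    and the counterfactual pass agree (case/whitespace-insensitive)."""
    return answer.strip().lower() == counterfactual_answer.strip().lower()

if __name__ == "__main__":
    prompt = build_counterfactual_prompt(
        "Is there a dog in the image?",
        ["the animal were actually a cat"],
    )
    print(prompt)
    print(verify("Yes", " yes "))
```

In a real pipeline, both prompts would be sent to the LMM and only mutually consistent answers kept; here the model calls are omitted so the sketch stays self-contained.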
Abstract
This paper presents a way of enhancing the reliability of large multimodal models (LMMs) in addressing hallucination effects, where models generate incorrect or unrelated responses. Without additional instruction