September 2024
Understanding Knowledge Drift in LLMs through Misinformation
Alina Fastowski, Gjergji Kasneci
TL;DR
This work addresses the knowledge drift that occurs when large language models are exposed to misinformation. Through an in-depth analysis of how models react to false information in question-answering scenarios, it proposes an approach that combines entropy, perplexity, and token-probability measures. The study finds that a model's uncertainty can increase by up to 56.6% when it encounters misinformation, while repeated exposure to the same false information can in turn lower that uncertainty, eroding the model's original knowledge. These findings advance the reliability of LLMs for practical applications.
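As a rough illustration of the three uncertainty signals named above (this is a minimal sketch, not the authors' released code), one can score a model's answer tokens and derive token probability, predictive entropy, and perplexity from the logits. The model name "gpt2" and the example prompts are illustrative assumptions.

```python
# Minimal sketch of the uncertainty measures mentioned in the TL;DR:
# mean token probability, mean predictive entropy, and perplexity
# over an answer's tokens. Not the paper's official implementation.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def uncertainty_metrics(prompt: str, answer: str) -> dict:
    """Score `answer` conditioned on `prompt` and return the three metrics."""
    full = tokenizer(prompt + answer, return_tensors="pt")
    # Note: tokenizing prompt and answer jointly can shift boundaries slightly;
    # acceptable for a sketch.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(**full).logits  # (1, seq_len, vocab)
    # Logits at position t predict token t+1; align them with the answer span.
    shift_logits = logits[0, prompt_len - 1 : -1]        # (ans_len, vocab)
    answer_ids = full.input_ids[0, prompt_len:]          # (ans_len,)
    log_probs = F.log_softmax(shift_logits, dim=-1)
    idx = torch.arange(len(answer_ids))
    token_log_probs = log_probs[idx, answer_ids]         # log p of each answer token
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)           # per-position entropy
    return {
        "mean_token_prob": token_log_probs.exp().mean().item(),
        "mean_entropy": entropy.mean().item(),
        "perplexity": torch.exp(-token_log_probs.mean()).item(),
    }

# Hypothetical usage: compare confidence in a true answer before and
# after a misleading context is prepended, mirroring the paper's setup.
print(uncertainty_metrics("The capital of France is", " Paris"))
print(uncertainty_metrics(
    "According to recent reports, the capital of France was moved to Lyon. "
    "The capital of France is", " Paris"))
```

Under this setup, a rise in entropy or perplexity (or a drop in mean token probability) on the second call would indicate the kind of misinformation-induced uncertainty shift the study quantifies.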
Abstract
Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation.