BriefGPT.xyz
Sep, 2024
Contextual Compression in Retrieval-Augmented Generation for Large Language Models: A Survey
Sourav Verma
TL;DR
This work addresses problems that large language models (LLMs) face when generating content, such as hallucination, outdated knowledge, and opaque reasoning. Building on Retrieval-Augmented Generation (RAG), which combines the intrinsic knowledge of LLMs with external databases, the paper surveys the paradigm of contextual compression, analyzes its evolution and current challenges, and outlines directions for future research.
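The contextual compression idea the survey covers can be illustrated with a toy example: retrieved passages are filtered down to the sentences most relevant to the query before being placed in the LLM prompt. The sketch below is purely illustrative and not a method from the paper; it uses simple lexical overlap as the relevance score, and all names and the word budget are hypothetical.

```python
# Toy sketch of contextual compression for RAG (illustrative only).
# Retrieved passages are split into sentences, scored by word overlap
# with the query, and only the highest-scoring sentences are kept
# within a fixed word budget.

def compress_context(query: str, passages: list[str], budget: int = 20) -> str:
    """Keep the most query-relevant sentences under a total word budget."""
    q_terms = set(query.lower().split())
    sentences = [s.strip() for p in passages for s in p.split(".") if s.strip()]
    # Rank sentences by how many query words they share.
    ranked = sorted(sentences,
                    key=lambda s: len(q_terms & set(s.lower().split())),
                    reverse=True)
    kept, used = [], 0
    for s in ranked:
        n = len(s.split())
        if used + n <= budget:
            kept.append(s)
            used += n
    return ". ".join(kept)

passages = [
    "RAG retrieves documents from an external database. The weather was sunny.",
    "Compression removes irrelevant text so the LLM sees a shorter context.",
]
print(compress_context("How does RAG compression shorten LLM context?", passages))
```

Real systems replace the overlap score with an embedding similarity or a learned compressor, but the control flow (score, rank, truncate to budget) is the same.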
Abstract
Large Language Models (LLMs) showcase remarkable abilities, yet they struggle with limitations such as hallucinations, outdated knowledge, opacity, and inexplicable reasoning. To address these challenges,
Retrieval-Augmented Generation (RAG) …