TL;DR: This paper proposes a method called "Selective Context" that uses self-information to filter out less informative content, and demonstrates its effectiveness in improving the efficiency of fixed-length contexts across different data sources.
Abstract
Large language models (LLMs) have received significant attention for their remarkable performance across various tasks. However, their fixed context length poses challenges when processing long documents or maintaining extended conversations. This paper proposes a method called \textit{Selective Context} that employs self-information to filter out less informative content, thereby improving the efficiency of the fixed context length.
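To make the core idea concrete, the sketch below illustrates self-information-based filtering. Note the assumptions: the paper scores content with a causal language model, whereas this toy version uses unigram frequencies within the text as a stand-in for token probabilities, and the function names (`self_information`, `selective_context`) and the `keep_ratio` parameter are illustrative, not from the paper.

```python
import math
from collections import Counter

def self_information(tokens):
    # Surprisal I(t) = -log2 p(t); here p(t) is the unigram frequency
    # of t in the passage (a stand-in for an LM's token probability).
    counts = Counter(tokens)
    total = len(tokens)
    return {t: -math.log2(counts[t] / total) for t in counts}

def selective_context(tokens, keep_ratio=0.5):
    # Keep the most informative fraction of tokens, preserving their
    # original order, and drop the low-surprisal (redundant) rest.
    info = self_information(tokens)
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)),
                    key=lambda i: info[tokens[i]], reverse=True)
    keep = set(ranked[:k])
    return [t for i, t in enumerate(tokens) if i in keep]

# Frequent tokens like "the" carry little self-information and are
# pruned first, shrinking the context while retaining content words.
print(selective_context("the cat the dog the bird".split(), 0.5))
```

In the paper's setting the same ranking is applied to lexical units scored by an LM's predicted probabilities; the filtering logic is otherwise analogous.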