Aug, 2024
Dynamic Self-Consistency: Leveraging Reasoning Paths for Efficient LLM Sampling
Guangya Wan, Yuqi Wu, Jie Chen, Sheng Li
TL;DR
To address the high computational cost of the Self-Consistency (SC) method for Large Language Models (LLMs), this paper proposes a new early-stopping framework, Reasoning-Aware Self-Consistency (RASC). By dynamically adjusting the number of generated samples and jointly considering both the output answers and their reasoning paths, RASC substantially reduces sampling usage while improving accuracy by up to 5%.
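The summary above does not specify RASC's exact stopping rule, so the following is only an illustrative sketch of reasoning-aware early stopping: draw samples one at a time, weight each answer's vote by a score of its reasoning path, and stop once one answer's weighted support crosses a threshold. The names `sample_with_reasoning`, `score_reasoning`, and the threshold value are assumptions for illustration, not the authors' implementation.

```python
from collections import defaultdict
from typing import Callable, Dict, Tuple


def early_stopping_sc(
    sample_with_reasoning: Callable[[], Tuple[str, str]],  # hypothetical: returns (answer, reasoning path)
    score_reasoning: Callable[[str], float],                # hypothetical: reasoning-quality score in [0, 1]
    max_samples: int = 20,
    confidence_threshold: float = 3.0,
) -> str:
    """Illustrative reasoning-aware early stopping (not the paper's exact RASC rule):
    accumulate reasoning-weighted votes per answer and stop sampling as soon as one
    answer's weighted support exceeds a fixed threshold."""
    weighted_votes: Dict[str, float] = defaultdict(float)
    best_answer = ""
    for _ in range(max_samples):
        answer, reasoning = sample_with_reasoning()
        # The reasoning path informs how much this sample's vote counts.
        weighted_votes[answer] += score_reasoning(reasoning)
        best_answer = max(weighted_votes, key=weighted_votes.get)
        if weighted_votes[best_answer] >= confidence_threshold:
            break  # enough support: stop generating further samples
    return best_answer
```

Compared with fixed-budget SC, a loop like this spends fewer samples on easy questions where early samples already agree, and more on hard ones.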
Abstract
Self-Consistency (SC) is a widely used method to mitigate hallucinations in Large Language Models (LLMs) by sampling the LLM multiple times and outputting the most frequent solution. Despite its benefits, SC results in significant computational costs.
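As described in the abstract, vanilla SC simply takes a majority vote over the final answers of several samples. A minimal sketch of that voting step, assuming a hypothetical `generate_answer` callable that returns one sampled answer per call:

```python
from collections import Counter
from typing import Callable, List


def self_consistency(generate_answer: Callable[[], str], num_samples: int = 10) -> str:
    """Vanilla Self-Consistency: sample the LLM num_samples times and
    return the most frequent final answer (majority vote)."""
    answers: List[str] = [generate_answer() for _ in range(num_samples)]
    # most_common(1) yields [(answer, count)] for the modal answer.
    return Counter(answers).most_common(1)[0][0]
```

Because the sample count is fixed up front, the cost of this baseline scales linearly with `num_samples` regardless of how quickly the answers converge, which is the inefficiency RASC targets.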