Feb, 2024
ApiQ: Finetuning of 2-Bit Quantized Large Language Model
Baohao Liao, Christof Monz
TL;DR
This paper introduces a new quantization framework, ApiQ, which addresses the loss of pretrained knowledge and the propagation of quantization error that occur during memory-efficient finetuning of large language models, achieving consistently superior finetuning results across a range of quantization bit widths.
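To make the setting concrete, the following is a minimal sketch of uniform group-wise 2-bit weight quantization, the low-bit regime this paper targets. This is not the authors' ApiQ implementation; the function name, group size, and example weights are illustrative assumptions, and the quantization error computed at the end is the kind of degradation ApiQ aims to offset via its initialization.

```python
# Hypothetical sketch of uniform 2-bit group quantization (4 levels per
# group), illustrating the error that low-bit quantization introduces.
def quantize_2bit(weights, group_size=4):
    """Quantize a flat list of weights to 2 bits per group, then dequantize."""
    out = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 3.0 or 1.0   # 4 levels -> 3 steps; avoid div by 0
        for w in group:
            q = round((w - lo) / scale)  # integer code in {0, 1, 2, 3}
            out.append(q * scale + lo)   # dequantized value
    return out

W = [0.31, -1.2, 0.05, 0.9, -0.4, 0.7, 0.02, -0.88]
Wq = quantize_2bit(W)
# Mean absolute quantization error over the weights.
err = sum(abs(a - b) for a, b in zip(W, Wq)) / len(W)
```

At 2 bits each group collapses to only four representable values, so the reconstruction error is substantial; methods like ApiQ try to compensate for this error with carefully initialized trainable adapters rather than quantization alone.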
Abstract
Memory-efficient finetuning of large language models (LLMs) has recently attracted huge attention with the increasing size of LLMs, primarily due to the constraints posed by GPU memory limitations and the compara…