Aug, 2023
LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning
Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, Bo Li
TL;DR
LoRA-FA adopts a low-memory weight-update scheme for fine-tuning large language models: it attains accuracy close to full-parameter fine-tuning while reducing memory usage, as a technical optimization of LoRA.
Abstract
The low-rank adaptation (LoRA) method can largely reduce the amount of trainable parameters for fine-tuning large language models (LLMs), …
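
As a concrete illustration of the weight-update scheme described in the TL;DR, here is a minimal PyTorch sketch of a LoRA-FA-style linear layer, under the assumption that "FA" denotes a frozen down-projection A: the pretrained weight W and A stay frozen while only the up-projection B is trained. The class name `LoRAFALinear` and the hyperparameter values are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRAFALinear(nn.Module):
    """Sketch of a LoRA-FA-style linear layer: W and A are frozen, only B is trained."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight (stands in for one linear layer of the LLM).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Frozen down-projection A (assumed interpretation of the "FA" in LoRA-FA).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01, requires_grad=False)
        # Trainable up-projection B, zero-initialized so training starts from the pretrained model.
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scaling * (x A^T) B^T
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

# Only lora_B receives gradients, so optimizer state stays small.
layer = LoRAFALinear(in_features=1024, out_features=1024)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 1024 * 8 = 8192 trainable parameters
```

Because A is frozen, the backward pass only needs the low-rank activation x A^T rather than the full-width input x to compute B's gradient, which is the likely source of the activation-memory saving the summary claims.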