Aug, 2024
CoRA: Optimizing Low-Rank Adaptation with Common Subspace of Large Language Models
Xiaojun Xiao, Sen Shen, Qiming Bao, Hongfei Rong, Kairui Liu...
TL;DR
This study addresses the waste of computational resources in Low-Rank Adaptation (LoRA) fine-tuning of large language models. The proposed CoRA method uses shared knowledge to optimize LoRA training, significantly reducing the number of trainable parameters while improving performance. Experiments show that the method preserves efficiency while outperforming original LoRA fine-tuning at the same parameter count.
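To make the TL;DR concrete, below is a minimal PyTorch sketch (not the authors' code) of a LoRA-style linear layer in which the up-projection matrix B is replaced by a frozen, shared matrix, so only the down-projection A is trained. The class name CoRALinear, the shared_B argument, and the random stand-in for the shared subspace are illustrative assumptions; how the common subspace is actually constructed is described in the paper.

import torch
import torch.nn as nn

class CoRALinear(nn.Module):
    # LoRA-style adapter with a frozen, shared up-projection B
    # (hypothetical sketch of the idea summarized above).
    def __init__(self, base: nn.Linear, shared_B: torch.Tensor,
                 r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base  # pretrained layer, kept frozen as in LoRA
        self.base.weight.requires_grad_(False)
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # A is the only trainable adapter matrix; zero init keeps the
        # adapter a no-op at the start (vanilla LoRA instead zeros B).
        self.A = nn.Parameter(torch.zeros(r, base.in_features))
        # B comes from a shared subspace and stays frozen, roughly
        # halving the adapter's trainable parameters vs. vanilla LoRA.
        self.register_buffer("B", shared_B)  # shape: (out_features, r)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T

# Usage with a random stand-in for the shared subspace:
base = nn.Linear(768, 768)
shared_B = torch.randn(768, 8) / 8 ** 0.5
layer = CoRALinear(base, shared_B, r=8, alpha=16)
y = layer(torch.randn(2, 768))  # -> shape (2, 768)

The TL;DR also mentions a second mode, using the shared matrix only as an improved initialization for a trainable B; in the sketch above that would amount to wrapping B in nn.Parameter instead of registering it as a frozen buffer.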
Abstract
In fine-tuning Large Language Models (LLMs), conserving computational resources while maintaining effectiveness and improving outcomes within the same computational constraints is crucial. The …