Feb, 2025
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Shuangyi Chen, Yuanxin Guo, Yue Ju, Harik Dalal, Ashish Khisti
TL;DR
This work addresses the inefficiency of existing federated training methods by proposing a new framework, RoLoRA, which fine-tunes LoRA adapters via alternating optimization. Through theoretical analysis and extensive experiments, RoLoRA is shown to outperform prior methods in the quality and expressiveness of model updates, with clear advantages across multiple tasks and at larger model scales.
Abstract
Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) optimize federated training by reducing computational and communication costs. We propose RoLoRA, a federated framework using alternating optimization to fine-tune LoRA adapters.
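
Since the abstract only sketches the method, the following is a minimal, hypothetical illustration of what federated LoRA fine-tuning with alternating optimization could look like in PyTorch: clients alternate between training only the A factor and only the B factor of the LoRA adapter, and the server averages whichever factor was trained in that round. All names (LoRALinear, local_step, federated_round) and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of federated LoRA fine-tuning with alternating optimization.
# Names and hyperparameters are illustrative, not taken from the RoLoRA paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank LoRA update B @ A."""

    def __init__(self, d_in: int, d_out: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()


def local_step(model, data, target, train_A: bool, lr: float = 1e-2):
    """One local client update that trains only A (or only B), keeping the other factor fixed."""
    model.A.requires_grad_(train_A)
    model.B.requires_grad_(not train_A)
    opt = torch.optim.SGD([model.A] if train_A else [model.B], lr=lr)
    loss = nn.functional.mse_loss(model(data), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def federated_round(global_model, clients, train_A: bool):
    """Broadcast the global adapter, run local updates, then average the factor trained this round."""
    updated = []
    for data, target in clients:
        local = LoRALinear(global_model.base.in_features,
                           global_model.base.out_features,
                           rank=global_model.A.shape[0])
        local.load_state_dict(global_model.state_dict())
        local_step(local, data, target, train_A)
        updated.append(local.A.data if train_A else local.B.data)
    avg = torch.stack(updated).mean(dim=0)
    with torch.no_grad():
        (global_model.A if train_A else global_model.B).copy_(avg)


if __name__ == "__main__":
    torch.manual_seed(0)
    d_in, d_out, n_clients = 16, 8, 4
    global_model = LoRALinear(d_in, d_out)
    # Synthetic client datasets (data, target) standing in for local task data.
    clients = [(torch.randn(32, d_in), torch.randn(32, d_out)) for _ in range(n_clients)]
    for rnd in range(10):
        # Alternate: even rounds update A, odd rounds update B.
        federated_round(global_model, clients, train_A=(rnd % 2 == 0))
```

Under these assumptions, only one low-rank factor is communicated and aggregated per round, which is consistent with the abstract's claim that the framework reduces communication cost relative to exchanging full model updates.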