Mar, 2024
Improving LoRA in Privacy-preserving Federated Learning
Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding
TL;DR
For privacy-preserving federated learning, this paper proposes FFA-LoRA, an efficient and effective low-rank adaptation method that freezes the randomly initialized non-zero matrix and fine-tunes only the zero-initialized matrix. This mitigates challenges from data heterogeneity, noise amplification under differential privacy, and hyperparameter sensitivity, while halving communication cost and delivering more consistent performance and better computational efficiency across various federated learning tasks.
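The core idea summarized above can be sketched numerically. The following is a minimal, illustrative numpy sketch (not the authors' implementation): each client shares a frozen random matrix A, trains only a zero-initialized matrix B, and the server averages only B. Because A is identical everywhere, averaging B is exactly equivalent to averaging the full updates B·A, which is why freezing A avoids the aggregation mismatch of averaging A and B separately.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2                 # weight shape (d x k), LoRA rank r
W = rng.normal(size=(d, k))       # frozen pre-trained weight
A = rng.normal(size=(r, k))       # frozen: shared random init, never updated
num_clients = 3

# Each client fine-tunes its own copy of B (zero-initialized).
client_Bs = [np.zeros((d, r)) for _ in range(num_clients)]
for i in range(num_clients):
    # stand-in for a local gradient step on B (A and W stay fixed)
    client_Bs[i] += 0.01 * rng.normal(size=(d, r))

# Server aggregation: average only B -- half the traffic of sending (A, B).
B_global = np.mean(client_Bs, axis=0)

# Key property: with A shared and frozen, averaging B commutes with
# forming the update, i.e. mean_i(B_i @ A) == mean_i(B_i) @ A.
delta_avg = np.mean([B @ A for B in client_Bs], axis=0)
assert np.allclose(B_global @ A, delta_avg)

W_adapted = W + B_global @ A      # adapted weight after aggregation
```

If both A and B were trainable (as in vanilla LoRA), averaging them separately would not reproduce the average of the products A·B, which is one source of the inconsistency under data heterogeneity that FFA-LoRA avoids.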
Abstract
Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods on pre-trained language models for its good performance and computational efficiency. LoRA injects …