May, 2024
MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning
Junjie Wang, Guangjing Yang, Wentao Chen, Huahui Yi, Xiaohu Wu...
TL;DR
Proposes a novel method called Masked LoRA Experts (MLAE) that combines parameter-efficient fine-tuning with strategies for enhancing the independence of low-rank matrices and selectively activating them, improving model performance and knowledge diversity and achieving state-of-the-art results on the VTAB-1k and FGVC benchmarks.
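The sketch below illustrates the general idea described in the TL;DR: a frozen pre-trained layer augmented with several independent low-rank LoRA "experts", each of which can be stochastically masked (deactivated) during fine-tuning. This is a minimal illustration only; the hyperparameter names (num_experts, expert_rank, mask_prob) and the rank-1 expert and Bernoulli-masking choices are assumptions for the example, not the paper's exact formulation.

```python
# Minimal sketch of masked low-rank experts on top of a frozen linear layer.
# Assumed/illustrative names: MaskedLoRAExperts, num_experts, expert_rank, mask_prob.
import torch
import torch.nn as nn


class MaskedLoRAExperts(nn.Module):
    def __init__(self, in_features, out_features, num_experts=4,
                 expert_rank=1, mask_prob=0.1, scaling=1.0):
        super().__init__()
        # Frozen weight standing in for the pre-trained layer.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Independent low-rank experts: delta_i = B_i @ A_i.
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(expert_rank, in_features) * 0.01)
             for _ in range(num_experts)])
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, expert_rank))
             for _ in range(num_experts)])
        self.mask_prob = mask_prob
        self.scaling = scaling

    def forward(self, x):
        out = self.base(x)
        for A, B in zip(self.A, self.B):
            # Randomly mask (skip) an expert during training; keep all at inference.
            if self.training and torch.rand(()) < self.mask_prob:
                continue
            out = out + self.scaling * (x @ A.t() @ B.t())
        return out


# Usage: only the expert parameters A_i, B_i receive gradients.
layer = MaskedLoRAExperts(768, 768)
y = layer(torch.randn(2, 16, 768))
```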
Abstract
In response to the challenges posed by the extensive parameter updates required for full fine-tuning of large-scale pre-trained models, parameter-efficient fine-tuning (PEFT) methods, exemplified by low-rank adaptation (LoRA), …