BriefGPT.xyz
Nov, 2024
Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models
Huancheng Chen, Jingtao Li, Nidham Gazagnadou, Weiming Zhuang, Chen Chen...
TL;DR
This work targets catastrophic forgetting in continual learning and proposes a novel method, Dual Low-Rank Adaptation (DualLoRA). By introducing an orthogonal LoRA adapter and a residual LoRA adapter in each layer, combined with a dynamic memory mechanism, the method improves both model stability and plasticity. Across multiple benchmarks, DualLoRA significantly outperforms existing methods in accuracy, inference speed, and memory efficiency.
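The dual-adapter idea can be illustrated with a minimal sketch: a frozen pre-trained linear layer augmented by two low-rank branches, one intended to be constrained orthogonal to directions important for earlier tasks (stability) and one left unconstrained (plasticity). Note this is an illustrative assumption about the architecture, not the paper's exact formulation; the class, method names, and the projection step are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class DualLoRALinear:
    """Sketch of a dual-LoRA linear layer (assumed structure, not the
    paper's exact method): frozen weight W plus two rank-r adapters."""

    def __init__(self, d_in, d_out, rank):
        # frozen pre-trained weight
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        # orthogonal adapter: meant to be trained under an
        # orthogonality constraint w.r.t. earlier-task subspaces
        self.A_orth = np.zeros((rank, d_in))
        self.B_orth = rng.standard_normal((d_out, rank)) * 0.01
        # residual adapter: unconstrained low-rank update
        self.A_res = np.zeros((rank, d_in))
        self.B_res = rng.standard_normal((d_out, rank)) * 0.01

    def project_orthogonal(self, U):
        """Remove components of A_orth lying in the subspace spanned by
        the orthonormal columns of U (directions tied to old tasks)."""
        self.A_orth = self.A_orth - (self.A_orth @ U) @ U.T

    def forward(self, x):
        # effective weight = frozen W + both low-rank corrections
        W_eff = self.W + self.B_orth @ self.A_orth + self.B_res @ self.A_res
        return W_eff @ x

layer = DualLoRALinear(d_in=8, d_out=4, rank=2)
x = rng.standard_normal(8)
y = layer.forward(x)  # equals W @ x while both A matrices are zero
```

Because both `A` matrices start at zero, the adapted layer initially reproduces the frozen model exactly, a standard LoRA initialization choice that keeps the pre-trained behavior intact before any task-specific training.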
Abstract
In the era of foundation models, we revisit Continual Learning (CL), which aims to enable Vision Transformers (ViTs) to learn new tasks over time. However, as the scale of these models increases, catastrophic forgetting …