May, 2023
Continual Learning with Strong Experience Replay
Tao Zhuo, Zhiyong Cheng, Zan Gao, Mohan Kankanhalli
TL;DR
This work proposes a continual learning method based on strong experience replay. It uses the current training data to mimic future experiences and distills past experiences stored in a memory buffer, improving the model's prediction consistency and thereby effectively preserving previously acquired knowledge. Experimental results show that the method outperforms existing approaches on multiple image classification datasets.
Abstract
Continual learning (CL) aims at incrementally learning new tasks without forgetting the knowledge acquired from old ones. Experience replay (ER) is a simple and effective rehearsal-based strategy, which optimizes …
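
To make the rehearsal idea concrete, below is a minimal PyTorch-style sketch of one training step that combines standard experience replay with the two consistency terms described in the TL;DR: distilling past experiences from the memory buffer, and using current training data to mimic future experiences by matching the frozen old model's predictions. The function name, the buffer API, and the weights alpha and beta are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch, assuming a PyTorch model and a replay buffer that stores
# old samples together with their previously recorded logits. Names and
# hyper-parameters are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def replay_step(model, old_model, optimizer, x_new, y_new, buffer,
                alpha=1.0, beta=1.0):
    """One training step: new-task loss + replay + two distillation terms."""
    model.train()
    optimizer.zero_grad()

    # Standard cross-entropy on the current task's mini-batch.
    logits_new = model(x_new)
    loss = F.cross_entropy(logits_new, y_new)

    # Experience replay: rehearse a few samples kept from old tasks.
    x_old, y_old, stored_logits_old = buffer.sample()  # assumed buffer API
    logits_old = model(x_old)
    loss = loss + F.cross_entropy(logits_old, y_old)

    # Distill past experiences: keep current outputs on buffered samples
    # close to the outputs recorded when those samples were stored.
    loss = loss + alpha * F.mse_loss(logits_old, stored_logits_old)

    # Mimic future experiences with current data: for the frozen old model,
    # the new task's data are "future" inputs, so enforcing prediction
    # consistency on them helps retain the old model's knowledge.
    with torch.no_grad():
        logits_new_from_old = old_model(x_new)
    loss = loss + beta * F.mse_loss(logits_new, logits_new_from_old)

    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, alpha and beta simply balance the two consistency terms against the classification losses; how the paper actually weights or formulates these terms is not specified in the excerpt above.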