Oct, 2021
Center Loss Regularization for Continual Learning
Kaustubh Olpadkar, Ekta Gavas
TL;DR
We propose using center loss as a regularization penalty to preserve the memory of old tasks, enabling neural networks to maintain high performance on previous tasks while learning new ones.
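As a rough illustration of the penalty the summary refers to, below is a minimal sketch of the standard center loss (mean squared distance between each feature vector and its class center), which the paper proposes adding to the task loss as a regularizer. The function name, array shapes, and toy values are illustrative, not taken from the paper.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: 0.5 * mean squared distance between each
    feature vector and the center of its class.
    features: (N, D), labels: (N,), centers: (C, D)."""
    diffs = features - centers[labels]                 # (N, D) residuals
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))   # scalar penalty

# Toy example: two samples, two classes, 2-D features.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
centers = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = center_loss(feats, labels, centers)  # → 0.25
```

In a continual-learning setting, keeping the centers of old classes fixed and penalizing drift of new features away from them is one way such a term can preserve old-task structure.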
Abstract
The ability to learn different tasks sequentially is essential to the development of artificial intelligence. In general, neural networks lack this capability, the major obstacle being catastrophic forgetting. It →