Nov, 2023
Online Continual Learning via Logit Adjusted Softmax
Zhehao Huang, Tao Li, Chenhe Yuan, Yingwen Wu, Xiaolin Huang
TL;DR
We propose an online continual learning method that adjusts the model's logits during training to counteract class-prior bias and pursue the Bayes-optimal classifier, effectively mitigating the impact of inter-class imbalance on model performance.
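A minimal sketch of the general logit-adjustment idea behind this line of work, assuming class priors are estimated from running counts over the data stream; the function name, `tau`, and the exact adjustment form here are illustrative, not necessarily the paper's formulation:

```python
import numpy as np

def logit_adjusted_log_softmax(logits, class_counts, tau=1.0):
    """Log-softmax with a log-prior offset added to the logits.

    Training against these adjusted scores counteracts class-prior
    bias: frequent classes get a head start in the loss, so the model
    must earn larger margins on rare classes.
    """
    # Priors estimated from observed counts (in online continual
    # learning these would be running counts -- an assumption here).
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * np.log(priors)
    # Numerically stable log-softmax.
    adjusted = adjusted - adjusted.max(axis=-1, keepdims=True)
    return adjusted - np.log(np.exp(adjusted).sum(axis=-1, keepdims=True))

# Toy usage: heavily imbalanced counts shift scores toward the head class.
logits = np.array([[2.0, 1.0, 0.5]])
counts = np.array([100.0, 10.0, 1.0])
log_probs = logit_adjusted_log_softmax(logits, counts)
```

At inference time the raw (unadjusted) logits would typically be used, so the learned margins favor the tail classes.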
Abstract
Online continual learning is a challenging problem where models must learn from a non-stationary data stream while avoiding catastrophic forgetting.