BriefGPT.xyz
June 2022
Sample Condensation in Online Continual Learning
Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu
TL;DR
This paper proposes OLCGM, a new replay-based continual learning strategy that uses knowledge condensation techniques to continually compress the replay memory and make better use of its limited size, achieving higher final accuracy than existing replay strategies.
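To make the idea concrete, here is a minimal, hypothetical sketch of a replay buffer that condenses stored samples instead of simply evicting them when full. The class name, the pairwise linear-combination "condensation", and the reservoir-style fallback are all illustrative assumptions, not the paper's actual OLCGM algorithm, which relies on an optimization-based condensation step.

```python
import numpy as np

class CondensingReplayBuffer:
    """Toy replay buffer: when full, merge two same-class samples into one
    synthetic sample to free a slot, rather than discarding data outright.
    NOTE: the linear combination below is a crude stand-in for the
    optimization-based condensation used by OLCGM."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []  # list of (x: np.ndarray, y: int)

    def add(self, x, y):
        if len(self.samples) < self.capacity:
            self.samples.append((x, y))
            return
        # Buffer full: look for two stored samples with the same label.
        by_class = {}
        for i, (_, label) in enumerate(self.samples):
            by_class.setdefault(label, []).append(i)
        for label, idxs in by_class.items():
            if len(idxs) >= 2:
                i, j = idxs[0], idxs[1]
                xi, _ = self.samples[i]
                xj, _ = self.samples[j]
                condensed = 0.5 * xi + 0.5 * xj  # placeholder condensation
                # Replace the pair with the condensed sample, then store the new one.
                self.samples = [s for k, s in enumerate(self.samples)
                                if k not in (i, j)]
                self.samples.append((condensed, label))
                self.samples.append((x, y))
                return
        # No same-class pair available: fall back to random replacement.
        self.samples[np.random.randint(self.capacity)] = (x, y)
```

For example, with `capacity=2`, adding two samples of class 0 and then one of class 1 condenses the class-0 pair into a single synthetic sample, so both classes stay represented in the fixed-size memory.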
Abstract
Online continual learning is a challenging learning scenario where the model must learn from a non-stationary stream of data in which each sample is seen only once. The main challenge is to incrementally learn while avoiding …