BriefGPT.xyz
Sep, 2018
StackNet: Parameter Stacking for Continual Learning
HC-Net: Memory-based Incremental Dual-Network System for Continual Learning
Jangho Kim, Jeesoo Kim, Nojun Kwak
TL;DR
This paper proposes a continual learning method that learns additional tasks by stacking parameters while preserving performance on previously learned tasks. StackNet guarantees that performance on earlier tasks does not degrade, while an index module identifies the source task of an input sample with high confidence. The method is competitive with PackNet and highly intuitive.
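The idea described above can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's actual architecture: per-task parameters are "stacked" as frozen linear heads (so earlier tasks' outputs are preserved exactly), and a hypothetical index module, here a simple nearest-centroid classifier, guesses which task an input came from and routes it to that task's frozen head.

```python
import numpy as np

class StackNetSketch:
    """Toy illustration of parameter stacking for continual learning.

    For each new task we append (stack) a freshly trained softmax head
    and never update it again, so earlier tasks' performance cannot
    degrade. A nearest-centroid index module (a stand-in for the paper's
    index module) picks which task's head to use at inference time.
    """

    def __init__(self, in_dim):
        self.in_dim = in_dim
        self.stacks = []     # frozen (W, b) per task
        self.centroids = []  # per-task input mean, used by the index module

    def learn_task(self, X, y, n_classes, lr=0.1, epochs=200):
        # Train a fresh softmax head on this task only; old stacks are untouched.
        mu = X.mean(axis=0)
        Xc = X - mu  # center per task for better conditioning
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(self.in_dim, n_classes))
        b = np.zeros(n_classes)
        Y = np.eye(n_classes)[y]
        for _ in range(epochs):
            logits = Xc @ W + b
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            g = (p - Y) / len(Xc)  # softmax cross-entropy gradient
            W -= lr * Xc.T @ g
            b -= lr * g.sum(axis=0)
        self.stacks.append((W, b))  # freeze: never updated again
        self.centroids.append(mu)

    def predict(self, x):
        # Index module: route to the task whose input centroid is closest.
        t = int(np.argmin([np.linalg.norm(x - c) for c in self.centroids]))
        W, b = self.stacks[t]
        return t, int(np.argmax((x - self.centroids[t]) @ W + b))
```

Because each head is frozen once its task is learned, predictions for old tasks are bit-exact preserved; the only failure mode is the index module misrouting an input, which is why its confidence matters.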
Abstract
Training a neural network for a classification task typically assumes that the data to train are given from the beginning. However, in the real world, additional data accumulate gradually and the model requires a …