Feb, 2023
Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks
Sudipta Kar, Giuseppe Castellucci, Simone Filice, Shervin Malmasi, Oleg Rokhlenko
TL;DR
This paper proposes a knowledge distillation-based incremental learning method that leverages unlabeled data to avoid catastrophic forgetting. On public benchmarks the method performs strongly, preserving previously acquired knowledge while achieving good performance on incrementally added tasks.
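To make the TL;DR concrete, below is a minimal PyTorch sketch of how a distillation objective over unlabeled data can be combined with a supervised loss for a newly added task; it is not the authors' implementation, and the `model(..., task=...)` interface, `temperature`, and `alpha` weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

def training_step(model, teacher, new_batch, unlabeled_batch, alpha=0.5):
    # Supervised cross-entropy on the newly added task.
    new_logits = model(new_batch["input_ids"], task="new")
    ce_loss = F.cross_entropy(new_logits, new_batch["labels"])

    # Distillation on unlabeled text: match a frozen copy of the previous
    # model's predictions for earlier tasks to limit catastrophic forgetting.
    with torch.no_grad():
        teacher_logits = teacher(unlabeled_batch["input_ids"], task="old")
    student_old_logits = model(unlabeled_batch["input_ids"], task="old")
    kd_loss = distillation_loss(student_old_logits, teacher_logits)

    return ce_loss + alpha * kd_loss
```

The key design choice this sketch illustrates is that the distillation term requires no labels for the earlier tasks: the frozen previous model supplies soft targets on unlabeled data, so old training sets need not be retained.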
Abstract
Multi-task learning (MTL) is widely accepted in natural language processing as a standard technique for learning multiple related tasks in one model. Training an MTL model requires having the training data for all …