Apr, 2024
Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners
Keon-Hee Park, Kyungwoo Song, Gyeong-Moon Park
TL;DR
This paper introduces PriViLege, a new FSCIL framework that combines pre-trained vision and language transformers with prompting functions and knowledge distillation. It effectively mitigates forgetting and overfitting in FSCIL and achieves results that clearly outperform existing methods.
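The TL;DR mentions learnable prompts on top of frozen pre-trained transformers together with an incrementally growing class space. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the authors' PriViLege implementation: `PromptedEncoder` is a small stand-in for a frozen pre-trained ViT, `IncrementalClassifier` simply appends a new weight block per session, and all names and dimensions are assumptions chosen for illustration.

```python
# Hypothetical sketch of prompt tuning on a frozen backbone with an
# incrementally extended classifier. NOT the authors' PriViLege code.
import torch
import torch.nn as nn


class PromptedEncoder(nn.Module):
    def __init__(self, dim=64, num_prompts=4, num_layers=2):
        super().__init__()
        # Small stand-in for a frozen pre-trained ViT backbone.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=num_layers)
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone stays frozen; only prompts are trained
        # Learnable prompt tokens prepended to the token sequence.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, tokens):                    # tokens: (B, N, dim)
        b = tokens.size(0)
        prompts = self.prompts.expand(b, -1, -1)  # broadcast prompts per sample
        out = self.backbone(torch.cat([prompts, tokens], dim=1))
        return out[:, 0]                          # first prompt token as the feature


class IncrementalClassifier(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.dim = dim
        self.heads = nn.ParameterList()           # one weight block per session

    def add_session(self, num_new_classes):
        self.heads.append(nn.Parameter(torch.randn(num_new_classes, self.dim) * 0.02))

    def forward(self, feats):
        weights = torch.cat(list(self.heads), dim=0)
        return feats @ weights.t()                # logits over all classes seen so far


if __name__ == "__main__":
    enc, clf = PromptedEncoder(), IncrementalClassifier()
    clf.add_session(5)                            # base session with 5 classes
    x = torch.randn(2, 16, 64)                    # 2 samples, 16 tokens each
    print(clf(enc(x)).shape)                      # torch.Size([2, 5])
    clf.add_session(5)                            # incremental session adds 5 classes
    print(clf(enc(x)).shape)                      # torch.Size([2, 10])
```

Freezing the backbone keeps the number of trainable parameters small, which is the usual motivation for prompt-based approaches when only a few samples per new class are available.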
Abstract
Few-shot class incremental learning (FSCIL) is a task that requires a model to learn new classes incrementally, without forgetting, when only a few samples for each class are given. FSCIL encounters two significant challenges: catastrophic forgetting and overfitting.