July 2024
Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
Marco Mistretta, Alberto Baldrati, Marco Bertini, Andrew D. Bagdanov
TL;DR
This paper introduces Knowledge Distillation Prompt Learning (KDPL), a method that uses unsupervised knowledge distillation from a more powerful model to improve the generalization of prompt-learning-based vision-language models on zero-shot domain generalization, cross-dataset generalization, and zero-shot base-to-novel class generalization.
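To make the distillation idea concrete, below is a minimal PyTorch sketch of the kind of training loop the TL;DR describes: learnable prompt parameters of a student VLM are updated with a KL-divergence loss against a stronger teacher's soft predictions on unlabeled images. The encoders, dimensions, and the way prompts enter the text features here are simplified stand-ins for illustration, not the authors' actual KDPL implementation.

```python
# Sketch of unsupervised knowledge distillation for prompt learning.
# All modules and shapes below are hypothetical stand-ins, not the
# paper's CLIP-based setup.
import torch
import torch.nn.functional as F

D = 512            # shared embedding dimension (assumed)
NUM_CLASSES = 10   # candidate class names; image labels are never used

# Frozen "teacher" and "student" image encoders (stand-ins for a large
# and a small vision-language model).
teacher_img = torch.nn.Linear(2048, D).requires_grad_(False)
student_img = torch.nn.Linear(2048, D).requires_grad_(False)
teacher_txt = torch.randn(NUM_CLASSES, D)   # fixed teacher class embeddings

# The only trainable parameters: a soft-prompt term on the student's
# text features (a simplification of prepending learnable prompt tokens).
class_base = torch.randn(NUM_CLASSES, D)
prompt_offset = torch.nn.Parameter(torch.zeros(NUM_CLASSES, D))
optimizer = torch.optim.SGD([prompt_offset], lr=1e-2)

def logits(img_feats, txt_feats, temp=0.07):
    # CLIP-style logits: cosine similarity scaled by a temperature.
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    return img @ txt.t() / temp

for step in range(100):
    x = torch.randn(32, 2048)                # a batch of UNLABELED images
    with torch.no_grad():
        t_logits = logits(teacher_img(x), teacher_txt)   # teacher predictions
    s_logits = logits(student_img(x), class_base + prompt_offset)

    # KL divergence pulls the prompted student toward the teacher's soft
    # predictions; no ground-truth labels are needed anywhere.
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the loss depends only on the teacher's predicted class distribution, the same loop applies unchanged to unseen domains and datasets, which is what the paper's zero-shot evaluation settings probe.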
Abstract
Vision-language models (VLMs) demonstrate remarkable zero-shot generalization to unseen tasks, but fall short of the performance of supervised methods in generalizing to downstream tasks with limited data.