Jul, 2023
Improving Zero-Shot Generalization for CLIP with Synthesized Prompts
Zhengbo Wang, Jian Liang, Ran He, Nan Xu, Zilei Wang...
TL;DR
This paper proposes a generative model-adaptation approach (SHIP) that enables the pretrained vision-language model CLIP, trained on paired text and image information, to perform better on classes for which no labeled data is available. Extensive experiments on base-to-new generalization, cross-dataset transfer learning, and generalized zero-shot learning demonstrate the superiority of the method.
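To make the idea concrete, below is a minimal sketch of the general recipe the TL;DR describes: a conditional generator synthesizes image features for label-free classes from their CLIP text embeddings, and a classifier is then trained over both real base-class features and the synthesized new-class features. This is an illustrative assumption, not the authors' SHIP implementation; the random tensors stand in for real CLIP encoder outputs, and names like `FeatureGenerator` are hypothetical.

```python
# Sketch only: conditional feature synthesis for label-free classes.
# Random tensors below are placeholders for real CLIP image/text features.
import torch
import torch.nn as nn

d = 512                                   # CLIP feature dimension (e.g. ViT-B/16)
base_classes, new_classes = 10, 5

# Labeled base-class data and class-name text embeddings (placeholders).
base_labels = torch.arange(base_classes).repeat_interleave(20)      # 20 images per base class
base_feats  = torch.randn(base_labels.numel(), d)                   # their image features
text_emb    = torch.randn(base_classes + new_classes, d)            # text embeddings for all classes

class FeatureGenerator(nn.Module):
    """Maps (class text embedding, noise) -> synthetic image feature."""
    def __init__(self, dim, noise_dim=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, cls_emb, n):
        z = torch.randn(n, self.noise_dim)
        cond = cls_emb.expand(n, -1)
        return self.net(torch.cat([cond, z], dim=1))

gen = FeatureGenerator(d)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

# Fit the generator so that features synthesized from a base class's text
# embedding resemble that class's real image features.
for _ in range(100):
    fake = torch.cat([gen(text_emb[c], int((base_labels == c).sum())) for c in range(base_classes)])
    real = torch.cat([base_feats[base_labels == c] for c in range(base_classes)])
    loss = nn.functional.mse_loss(fake, real)
    opt.zero_grad(); loss.backward(); opt.step()

# Synthesize features for the label-free (new) classes.
with torch.no_grad():
    syn_feats = torch.cat([gen(text_emb[base_classes + c], 20) for c in range(new_classes)])
syn_labels = torch.arange(new_classes).repeat_interleave(20) + base_classes

# A classifier over all classes can now be trained on the combined set.
all_feats  = torch.cat([base_feats, syn_feats])
all_labels = torch.cat([base_labels, syn_labels])
classifier = nn.Linear(d, base_classes + new_classes)
```

The design choice to condition generation on class-name text embeddings is what lets the method cover classes that have no training images at all; in practice the generator and any learned prompts would be trained against real CLIP features rather than the placeholders used here.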
Abstract
With the growing interest in pretrained vision-language models like CLIP, recent research has focused on adapting these models to downstream tasks. Despite achieving promising results, most existing methods require…