Sep, 2023
Improving CLIP Robustness with Knowledge Distillation and Self-Training
Clement Laroudie, Andrei Bursuc, Mai Lan Ha, Gianni Franchi
TL;DR
The LP-CLIP technique improves the robustness of CLIP by introducing a linear probing layer on top of it. The probe is trained with pseudo-labels generated by CLIP combined with a self-training strategy, requiring no annotated data. This strengthens the model's ability to cope with diverse uncertainties and challenges in real-world scenarios, and achieves SOTA results on various datasets.
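The training loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frozen CLIP image encoder and its text-prompt embeddings are stood in for by small random modules (`encoder`, `text_embeds` are hypothetical placeholders), and only the structure of the method is shown: freeze the backbone, derive pseudo-labels from the model's own zero-shot predictions, and train only the linear probe on them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for CLIP's frozen image encoder (hypothetical; real use would
# load a pretrained CLIP backbone and freeze its parameters).
encoder = nn.Linear(32, 16)
for p in encoder.parameters():
    p.requires_grad = False

num_classes = 10
probe = nn.Linear(16, num_classes)  # the LP-CLIP linear probing layer
opt = torch.optim.SGD(probe.parameters(), lr=0.1)

images = torch.randn(64, 32)  # toy batch of *unlabeled* inputs

# Self-training step: pseudo-labels come from the model's own zero-shot
# predictions. Here a fixed random matrix stands in for CLIP's text
# embeddings of the class prompts.
text_embeds = torch.randn(num_classes, 16)
with torch.no_grad():
    feats = encoder(images)
    pseudo_labels = (feats @ text_embeds.T).argmax(dim=1)

# Train only the probe on the pseudo-labels; no human annotations are used.
for _ in range(20):
    logits = probe(encoder(images))
    loss = F.cross_entropy(logits, pseudo_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the real setting the pseudo-labels would be refreshed as training progresses (the self-training loop), while the CLIP encoder stays frozen throughout, so only the lightweight probe carries the adaptation.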
Abstract
This paper examines the robustness of a multi-modal computer vision model, CLIP (Contrastive Language-Image Pretraining), in the context of unsup…