BriefGPT.xyz
May, 2024
Revisiting the Robust Generalization of Adversarial Prompt Tuning
Fan Yang, Mingxuan Xia, Sangzhou Xia, Chicheng Ma, Hui Hui
TL;DR
Multimodal prompt learning is used to improve the alignment of image and text features, leveraging the strong generalization ability of pre-trained CLIP to guide the model toward enhanced robust generalization on adversarial examples while preserving accuracy on clean examples.
Abstract
Understanding the vulnerability of large-scale pre-trained vision-language models like CLIP against adversarial attacks is key to ensuring zero-shot generalization capacity on various downstream tasks. State-of-t…