BriefGPT.xyz
Mar, 2022
Rethinking Task Sampling for Few-shot Vision-Language Transfer Learning
Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning
Zhenhailong Wang, Hang Yu, Manling Li, Han Zhao, Heng Ji
TL;DR
Proposes a novel model-agnostic multitask fine-tuning algorithm (MAMF) and investigates the impact of task sampling on effective few-shot learning. MAMF outperforms classical fine-tuning on five few-shot vision-language classification tasks; the work aims to provide new insights into few-shot learning and to encourage exploration of better task sampling strategies.
Abstract
Despite achieving state-of-the-art zero-shot performance, existing vision-language models, e.g., CLIP, still fall short of domain-specific classification tasks, e.g., Fungi Classification. In the context of few-shot tra…