April 2024
Q-GroundCAM: Quantifying Grounding in Vision Language Models via GradCAM
TL;DR: Vision-Language Models (VLMs) show remarkable zero-shot performance but struggle with compositional scene understanding and grounding linguistic phrases. This paper introduces novel quantitative metrics that use GradCAM activations to evaluate the grounding ability of pre-trained VLMs and to measure their uncertainty, revealing tradeoffs among model size, dataset size, and performance.
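To make the idea concrete, here is a minimal sketch of the standard GradCAM computation such metrics build on, plus one hypothetical grounding score (the fraction of heatmap mass falling inside a phrase's ground-truth box). The function names, the toy array shapes, and the mass-inside-box score are illustrative assumptions, not the paper's exact metrics; in a real VLM the gradients would come from backpropagating an image-text similarity score.

```python
import numpy as np

def gradcam(activations, gradients):
    """Standard GradCAM heatmap.

    activations, gradients: (K, H, W) arrays -- feature maps of a chosen
    layer and their gradients w.r.t. a score (for a VLM, typically the
    image-text matching / similarity score for the queried phrase).
    """
    alphas = gradients.mean(axis=(1, 2))        # per-channel importance weights
    cam = np.tensordot(alphas, activations, 1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                  # ReLU keeps positively contributing regions
    if cam.max() > 0:
        cam = cam / cam.max()                   # normalize to [0, 1]
    return cam

def mass_inside_box(cam, box):
    """Hypothetical grounding score: share of heatmap mass inside the
    phrase's ground-truth box, given as (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = box
    total = cam.sum()
    return float(cam[y0:y1, x0:x1].sum() / total) if total > 0 else 0.0
```

A score near 1 would indicate the model's evidence for the phrase is concentrated on the annotated region; averaging such scores over a dataset gives a dataset-level grounding measure in the spirit of the paper's metrics.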