Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy

Simon Ging, María A. Bravo, Thomas Brox

2024-02-11 · Open Vocabulary Attribute Detection · Visual Question Answering (VQA) · Visual Question Answering

Paper · PDF · Code (official)

Abstract

The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets which allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers. We perform a human evaluation study on which we base our choice of the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
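The ExactMatch and Contains metrics reported in the results below compare a model's free-form answer against the ground-truth label. As a rough illustration of why these two scores diverge so sharply, here is a minimal sketch of string-matching metrics of this kind; the `normalize`, `exact_match`, and `contains` helpers are illustrative and not the paper's actual implementation:

```python
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match(prediction: str, answer: str) -> bool:
    """ExactMatch-style check: normalized prediction equals the answer."""
    return normalize(prediction) == normalize(answer)

def contains(prediction: str, answer: str) -> bool:
    """Contains-style check: the answer appears as a substring of the prediction."""
    return normalize(answer) in normalize(prediction)

# Free-form generation rarely produces the label verbatim, which is one
# reason ExactMatch scores in the table are far below Contains scores.
print(exact_match("Golden Retriever.", "golden retriever"))                        # True
print(contains("The image shows a golden retriever playing.", "golden retriever")) # True
print(exact_match("The image shows a golden retriever playing.", "golden retriever"))  # False
```

A wordy but correct answer passes Contains while failing ExactMatch, which matches the pattern visible in the results table (e.g. BLIP-2 OPT on ImageNet: Contains 35.49 vs. ExactMatch 0.87).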

Results

| Task                             | Dataset        | Metric                  | Value | Model              |
|----------------------------------|----------------|-------------------------|-------|--------------------|
| Visual Question Answering (VQA)  | ActivityNet    | ClipMatch@1             | 53.39 | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | ClipMatch@5             | 74.71 | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | Contains                | 15.7  | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | ExactMatch              | 7.07  | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | Follow-up ClipMatch@1   | 62.02 | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | Follow-up ClipMatch@5   | 75.13 | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | Follow-up Contains      | 18.09 | BLIP-2 T5          |
| Visual Question Answering (VQA)  | ActivityNet    | Follow-up ExactMatch    | 8.84  | BLIP-2 T5          |
| Visual Question Answering (VQA)  | COCO           | ClipMatch@1             | 59.58 | InstructBLIP Vicuna |
| Visual Question Answering (VQA)  | COCO           | ClipMatch@5             | 73.32 | InstructBLIP Vicuna |
| Visual Question Answering (VQA)  | COCO           | Contains                | 27.52 | InstructBLIP Vicuna |
| Visual Question Answering (VQA)  | COCO           | ExactMatch              | 26.5  | InstructBLIP Vicuna |
| Visual Question Answering (VQA)  | OVAD benchmark | Contains w. Synonyms    | 45.7  | BLIP               |
| Visual Question Answering (VQA)  | OVAD benchmark | ExactMatch w. Synonyms  | 36.99 | BLIP               |
| Visual Question Answering (VQA)  | ImageNet       | ClipMatch@1             | 57.1  | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | ClipMatch@5             | 77.24 | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | Contains                | 35.49 | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | ExactMatch              | 0.87  | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | Follow-up ClipMatch@1   | 67.22 | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | Follow-up ClipMatch@5   | 83.54 | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | Follow-up Contains      | 40.31 | BLIP-2 OPT         |
| Visual Question Answering (VQA)  | ImageNet       | Follow-up ExactMatch    | 2.54  | BLIP-2 OPT         |

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights (2025-07-09)
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning (2025-07-09)
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling (2025-07-08)