Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering

Corentin Dancette, Remi Cadene, Damien Teney, Matthieu Cord

2021-04-07 · ICCV 2021 · Tasks: Question Answering · Visual Question Answering (VQA) · Visual Question Answering
Paper · PDF · Code (official)

Abstract

We introduce an evaluation methodology for visual question answering (VQA) to better diagnose cases of shortcut learning. These cases happen when a model exploits spurious statistical regularities to produce correct answers but does not actually deploy the desired behavior. There is a need to identify possible shortcuts in a dataset and assess their use before deploying a model in the real world. The research community in VQA has focused exclusively on question-based shortcuts, where a model might, for example, answer "What is the color of the sky?" with "blue" by relying mostly on the question-conditional training prior and give little weight to visual evidence. We go a step further and consider multimodal shortcuts that involve both questions and images. We first identify potential shortcuts in the popular VQA v2 training set by mining trivial predictive rules such as co-occurrences of words and visual elements. We then introduce VQA-CounterExamples (VQA-CE), an evaluation protocol based on our subset of CounterExamples, i.e., image-question-answer triplets where our rules lead to incorrect answers. We use this new evaluation in a large-scale study of existing approaches for VQA. We demonstrate that even state-of-the-art models perform poorly and that existing techniques to reduce biases are largely ineffective in this context. Our findings suggest that past work on question-based biases in VQA has only addressed one facet of a complex issue. The code for our method is available at https://github.com/cdancette/detect-shortcuts.
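The core idea of the VQA-CE protocol can be sketched in a few lines: shortcut "rules" map co-occurring question words and visual elements to a predicted answer, and the counterexamples subset consists of triplets where at least one rule applies but every applicable rule predicts the wrong answer. The rule representation and helper names below are illustrative assumptions, not the authors' actual code (see the linked repository for that):

```python
# Hypothetical sketch of the VQA-CE split logic. A rule is a pair
# (antecedent, answer): if all antecedent items co-occur in the example's
# question words and detected visual elements, the rule predicts that answer.
def split_examples(rules, examples):
    """Partition examples into 'easy' (a matching rule is correct),
    'counterexamples' (rules match but all are wrong), and 'other'
    (no rule applies). Field names here are assumptions."""
    easy, counterexamples, other = [], [], []
    for ex in examples:
        context = set(ex["tokens"]) | set(ex["objects"])
        predictions = {ans for ante, ans in rules if ante <= context}
        if not predictions:
            other.append(ex)             # no shortcut rule fires
        elif ex["answer"] in predictions:
            easy.append(ex)              # some shortcut gives the right answer
        else:
            counterexamples.append(ex)   # every applicable shortcut fails

    return easy, counterexamples, other

# Toy data mirroring the sky/blue example from the abstract.
rules = [(frozenset({"color", "sky"}), "blue")]
examples = [
    {"tokens": ["what", "color", "sky"], "objects": ["sky"], "answer": "blue"},
    {"tokens": ["what", "color", "sky"], "objects": ["sky"], "answer": "orange"},  # sunset photo
    {"tokens": ["how", "many", "dogs"], "objects": ["dog"], "answer": "2"},
]
easy, ce, other = split_examples(rules, examples)
```

A model that truly uses visual evidence should answer the `ce` examples correctly; a model leaning on the mined shortcuts will not, which is what the Accuracy (Counterexamples) numbers below measure.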

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 34.41 | RandImg |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 34.36 | LMH + CSS |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 34.27 | LFF |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 34.26 | LMH |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 33.91 | UpDown |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 33.26 | ESR |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 33.14 | LMH + RMFE |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 32.91 | BLOCK |
| Visual Question Answering (VQA) | VQA-CE | Accuracy (Counterexamples) | 32.25 | RUBi |

Related Papers

- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
- Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
- City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
- Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)