Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding

Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach

Published: 2016-06-06 · EMNLP 2016
Tasks: Visual Grounding · Phrase Grounding · Visual Question Answering (VQA)
Links: Paper · PDF · Code (official and community implementations)

Abstract

Modeling textual or visual information with vector representations trained from large language or visual datasets has been successfully explored in recent years. However, tasks such as visual question answering require combining these vector representations with each other. Approaches to multimodal pooling include element-wise product or sum, as well as concatenation of the visual and textual representations. We hypothesize that these methods are not as expressive as an outer product of the visual and textual vectors. As the outer product is typically infeasible due to its high dimensionality, we instead propose utilizing Multimodal Compact Bilinear pooling (MCB) to efficiently and expressively combine multimodal features. We extensively evaluate MCB on the visual question answering and grounding tasks. We consistently show the benefit of MCB over ablations without MCB. For visual question answering, we present an architecture which uses MCB twice, once for predicting attention over spatial features and again to combine the attended representation with the question representation. This model outperforms the state-of-the-art on the Visual7W dataset and the VQA challenge.
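The core trick the abstract refers to is compact bilinear pooling: instead of forming the full outer product of the two feature vectors, each vector is projected with a Count Sketch, and the convolution of the two sketches (computed efficiently in the Fourier domain) approximates the sketch of the outer product. The following is a minimal numpy sketch of that idea; the function names and the small dimensions are illustrative, not taken from the paper's released code (the paper projects to roughly 16,000 dimensions):

```python
import numpy as np

def count_sketch(v, h, s, d):
    """Count Sketch projection of vector v into d dimensions:
    y[h[i]] += s[i] * v[i], with random hash indices h and random signs s."""
    y = np.zeros(d)
    np.add.at(y, h, s * v)  # unbuffered scatter-add handles hash collisions
    return y

def mcb(x, q, d=512, seed=0):
    """Approximate bilinear (outer-product) pooling of visual features x
    and textual features q via Count Sketch + FFT (MCB-style)."""
    rng = np.random.default_rng(seed)
    # Fixed random hash functions and signs, one pair per input modality.
    hx = rng.integers(0, d, x.size)
    sx = rng.choice([-1.0, 1.0], x.size)
    hq = rng.integers(0, d, q.size)
    sq = rng.choice([-1.0, 1.0], q.size)
    # Convolution of the two sketches = elementwise product in Fourier domain.
    fx = np.fft.rfft(count_sketch(x, hx, sx, d))
    fq = np.fft.rfft(count_sketch(q, hq, sq, d))
    return np.fft.irfft(fx * fq, n=d)
```

This keeps the output at d dimensions instead of the |x|·|q| dimensions a full outer product would require, which is what makes using MCB twice (once for attention, once for the final fusion) tractable.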

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Visual Question Answering (VQA) | COCO VQA real images 1.0 (multiple choice) | Percentage correct | 70.1 | MCB 7 att.
Visual Question Answering (VQA) | COCO VQA real images 1.0 (open ended) | Percentage correct | 66.5 | MCB 7 att.
Visual Question Answering (VQA) | VQA v1 test-dev | Accuracy | 64.2 | MCB (ResNet)
Visual Question Answering (VQA) | Visual7W | Percentage correct | 62.2 | MCB+Att.
Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 64.7 | MCB
Phrase Grounding | ReferIt | Accuracy | 28.91 | MCB
Phrase Grounding | Flickr30k Entities Test | R@1 | 48.69 | MCB

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition (2025-07-15)
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation (2025-07-09)
A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding (2025-07-09)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)