Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Hierarchical Question-Image Co-Attention for Visual Question Answering

Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh

Published: 31 May 2016 · NeurIPS 2016
Tasks: Visual Dialog · Visual Question Answering (VQA)
Links: Paper · PDF · Code (9 implementations, 1 official)

Abstract

A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look" or visual attention, it is equally important to model "what words to listen to" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolutional neural network (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.
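The co-attention idea in the abstract can be made concrete with a small sketch. Below is a minimal NumPy implementation of the parallel co-attention variant, where an affinity matrix between question and image features is used to compute attention over both modalities simultaneously. Weight names (Wb, Wv, Wq, whv, whq) follow the paper's notation, but all dimensions and the softmax/tanh layout here are an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallel_coattention(V, Q, Wb, Wv, Wq, whv, whq):
    """Sketch of parallel co-attention (shapes are illustrative assumptions).

    V:   image features,    shape (d, N)  -- N spatial regions
    Q:   question features, shape (d, T)  -- T positions at some hierarchy level
    Wb:  (d, d) affinity weights; Wv, Wq: (k, d); whv, whq: (k,)
    Returns attended image/question vectors and the two attention maps.
    """
    C = np.tanh(Q.T @ Wb @ V)            # affinity matrix, (T, N)
    Hv = np.tanh(Wv @ V + (Wq @ Q) @ C)  # image hidden states, (k, N)
    Hq = np.tanh(Wq @ Q + (Wv @ V) @ C.T)  # question hidden states, (k, T)
    av = softmax(whv @ Hv)               # attention over image regions, (N,)
    aq = softmax(whq @ Hq)               # attention over question positions, (T,)
    v_hat = V @ av                       # attended image feature, (d,)
    q_hat = Q @ aq                       # attended question feature, (d,)
    return v_hat, q_hat, av, aq
```

In the full model this routine would be applied at each level of the question hierarchy (word, phrase, question), with the attended vectors combined recursively to predict the answer.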

Results

Task | Dataset | Metric | Value | Model
Visual Dialog | VisDial v0.9 val | MRR | 57.88 | HieCoAtt-QI
Visual Dialog | VisDial v0.9 val | Mean Rank | 5.84 | HieCoAtt-QI
Visual Dialog | VisDial v0.9 val | R@1 | 43.51 | HieCoAtt-QI
Visual Dialog | VisDial v0.9 val | R@5 | 74.49 | HieCoAtt-QI
Visual Dialog | VisDial v0.9 val | R@10 | 83.96 | HieCoAtt-QI
Visual Question Answering (VQA) | VQA v1 test-dev | Accuracy | 61.8 | HieCoAtt (ResNet)
Visual Question Answering (VQA) | VQA v1 test-std | Accuracy | 62.1 | HieCoAtt (ResNet)
Visual Question Answering (VQA) | COCO VQA real images 1.0, multiple choice | Percentage correct | 66.1 | HQI+ResNet
Visual Question Answering (VQA) | COCO VQA real images 1.0, open ended | Percentage correct | 62.1 | HQI+ResNet

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights (2025-07-09)
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning (2025-07-09)
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling (2025-07-08)