
VisualBERT: A Simple and Performant Baseline for Vision and Language

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang

2019-08-09 · Visual Reasoning · Visual Question Answering (VQA) · Language Modelling

Abstract

We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.
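The architecture described above is compact enough to sketch: region features from an object detector are projected into the token embedding space, tagged with a segment embedding marking them as visual, and concatenated with the word embeddings, so that a standard Transformer encoder attends over the joint sequence. Below is a minimal, illustrative PyTorch sketch of that input construction; the dimensions, the linear projection, and the segment-embedding scheme are assumptions chosen for illustration (position embeddings and the pre-training heads are omitted), not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class VisualBERTSketch(nn.Module):
    """Illustrative sketch of the VisualBERT idea: text tokens and detector
    region features share one Transformer encoder, so self-attention can
    implicitly align words with image regions. Hyperparameters below are
    assumptions, not the paper's exact values; position embeddings are
    omitted for brevity."""

    def __init__(self, vocab_size=30522, hidden=768, region_dim=2048,
                 layers=12, heads=12):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        # Segment embeddings distinguish text (0) from visual (1) inputs.
        self.seg_emb = nn.Embedding(2, hidden)
        # Project detector features (e.g. 2048-d Faster R-CNN regions)
        # into the same space as the word embeddings.
        self.region_proj = nn.Linear(region_dim, hidden)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (B, T) word-piece ids; region_feats: (B, R, region_dim)
        text = self.tok_emb(token_ids) + self.seg_emb(
            torch.zeros_like(token_ids))
        regions = self.region_proj(region_feats) + self.seg_emb(
            torch.ones(region_feats.shape[:2], dtype=torch.long,
                       device=region_feats.device))
        # Joint sequence: self-attention runs over words and regions together.
        return self.encoder(torch.cat([text, regions], dim=1))

model = VisualBERTSketch()
out = model(torch.randint(0, 30522, (2, 16)), torch.randn(2, 36, 2048))
print(out.shape)  # torch.Size([2, 52, 768])
```

On top of such an encoder, the paper's two visually-grounded pre-training objectives (masked language modelling with the image regions visible, and sentence-image matching on caption data) would each add their own prediction head.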

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Visual Question Answering (VQA) | VCR (Q-A) dev | Accuracy | 70.8 | VisualBERT |
| Visual Question Answering (VQA) | VCR (Q-A) test | Accuracy | 71.6 | VisualBERT |
| Visual Question Answering (VQA) | VCR (QA-R) dev | Accuracy | 73.2 | VisualBERT |
| Visual Question Answering (VQA) | VCR (QA-R) test | Accuracy | 73.2 | VisualBERT |
| Visual Question Answering (VQA) | VCR (Q-AR) dev | Accuracy | 52.2 | VisualBERT |
| Visual Question Answering (VQA) | VCR (Q-AR) test | Accuracy | 52.4 | VisualBERT |
| Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 70.8 | VisualBERT |
| Visual Question Answering (VQA) | VQA v2 test-std | overall | 71 | VisualBERT |
| Visual Reasoning | NLVR2 Dev | Accuracy | 66.7 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Dev | R@1 | 70.4 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Dev | R@5 | 84.49 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Dev | R@10 | 86.31 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Test | R@1 | 71.33 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Test | R@5 | 84.98 | VisualBERT |
| Phrase Grounding | Flickr30k Entities Test | R@10 | 86.51 | VisualBERT |
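For the phrase-grounding rows, R@k is the standard Flickr30k Entities recall: a phrase counts as grounded if any of the model's k highest-scoring region proposals overlaps a gold box with IoU >= 0.5. A small self-contained sketch of that metric follows; the box format and the one-gold-box-per-phrase simplification are assumptions (the full benchmark allows multiple gold boxes per phrase), and all coordinates in the toy example are made up.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def recall_at_k(predictions, gold_boxes, k, thresh=0.5):
    """predictions: per-phrase lists of boxes ranked by model score;
    gold_boxes: one gold box per phrase (a simplification). A phrase is
    a hit if any of its top-k predicted boxes reaches IoU >= thresh."""
    hits = sum(
        any(iou(box, gold) >= thresh for box in ranked[:k])
        for ranked, gold in zip(predictions, gold_boxes))
    return hits / len(gold_boxes)

# Toy example with two phrases (coordinates are made up):
preds = [[(0, 0, 10, 10), (5, 5, 20, 20)], [(0, 0, 4, 4)]]
gold = [(1, 1, 11, 11), (50, 50, 60, 60)]
print(recall_at_k(preds, gold, k=1))  # 0.5: only the first phrase hits
```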

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)