ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee

Published 2019-08-06 · NeurIPS 2019

Tasks: Question Answering, Visual Grounding, Referring Expression Comprehension, Visual Reasoning, Retrieval, Visual Question Answering (VQA), Visual Commonsense Reasoning, Image Retrieval

Links: Paper · PDF · Code

Abstract

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
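
For intuition, the sketch below shows one co-attentional transformer layer of the kind the abstract describes: the visual stream attends over language tokens while the language stream attends over image regions, each followed by a residual feed-forward block. The class name, layer sizes, and wiring here are illustrative assumptions (roughly matching ViLBERT's 1024-d region features and 768-d token embeddings), not the released implementation.

```python
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    """Minimal sketch of a co-attentional transformer layer (hypothetical dimensions)."""
    def __init__(self, vis_dim=1024, txt_dim=768, num_heads=8):
        super().__init__()
        # Cross-attention: visual queries attend to text keys/values, and vice versa.
        self.vis_cross = nn.MultiheadAttention(vis_dim, num_heads, kdim=txt_dim, vdim=txt_dim, batch_first=True)
        self.txt_cross = nn.MultiheadAttention(txt_dim, num_heads, kdim=vis_dim, vdim=vis_dim, batch_first=True)
        self.vis_norm1, self.vis_norm2 = nn.LayerNorm(vis_dim), nn.LayerNorm(vis_dim)
        self.txt_norm1, self.txt_norm2 = nn.LayerNorm(txt_dim), nn.LayerNorm(txt_dim)
        self.vis_ffn = nn.Sequential(nn.Linear(vis_dim, 4 * vis_dim), nn.GELU(), nn.Linear(4 * vis_dim, vis_dim))
        self.txt_ffn = nn.Sequential(nn.Linear(txt_dim, 4 * txt_dim), nn.GELU(), nn.Linear(4 * txt_dim, txt_dim))

    def forward(self, vis, txt):
        # vis: (batch, num_regions, vis_dim) image-region features
        # txt: (batch, num_tokens, txt_dim)  word-piece features
        vis_att, _ = self.vis_cross(vis, txt, txt)  # image stream queries language
        txt_att, _ = self.txt_cross(txt, vis, vis)  # language stream queries image
        vis = self.vis_norm1(vis + vis_att)
        txt = self.txt_norm1(txt + txt_att)
        vis = self.vis_norm2(vis + self.vis_ffn(vis))
        txt = self.txt_norm2(txt + self.txt_ffn(txt))
        return vis, txt

# Example: 36 region features and 20 token embeddings for a batch of 2.
vis_out, txt_out = CoAttentionBlock()(torch.randn(2, 36, 1024), torch.randn(2, 20, 768))
print(vis_out.shape, txt_out.shape)  # torch.Size([2, 36, 1024]) torch.Size([2, 20, 768])
```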

Results

Task                            | Dataset         | Metric       | Value | Model
Visual Question Answering (VQA) | A-OKVQA         | DA VQA Score | 12    | ViLBERT - VQA
Visual Question Answering (VQA) | A-OKVQA         | MC Accuracy  | 42.1  | ViLBERT - VQA
Visual Question Answering (VQA) | A-OKVQA         | DA VQA Score | 25.9  | ViLBERT
Visual Question Answering (VQA) | A-OKVQA         | MC Accuracy  | 41.5  | ViLBERT
Visual Question Answering (VQA) | A-OKVQA         | DA VQA Score | 9.2   | ViLBERT - OK-VQA
Visual Question Answering (VQA) | A-OKVQA         | MC Accuracy  | 34.1  | ViLBERT - OK-VQA
Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy     | 70.55 | ViLBERT
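
For context, the DA (direct-answer) VQA score reported above follows the standard soft VQA accuracy, which credits a prediction according to how many of the human annotators gave the same answer. The function below is a simplified sketch assuming pre-normalized answer strings; the official evaluation additionally averages over leave-one-out subsets of the ten human annotations.

```python
def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Soft VQA accuracy: full credit if at least 3 of the annotators gave the answer."""
    matches = sum(a == predicted for a in human_answers)
    return min(1.0, matches / 3.0)

# Example: 4 of 10 annotators answered "umbrella".
answers = ["umbrella"] * 4 + ["parasol"] * 6
print(vqa_soft_accuracy("umbrella", answers))  # 1.0
print(vqa_soft_accuracy("hat", answers))       # 0.0
```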

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)