
DocVQA: A Dataset for VQA on Document Images

Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar

2020-07-01 · Reading Comprehension · Question Answering · Visual Question Answering (VQA) · Visual Question Answering

Abstract

We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models need to improve specifically on questions where understanding the structure of the document is crucial. The dataset, code, and leaderboard are available at docvqa.org
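The strongest baselines reported in the paper treat DocVQA as extractive reading comprehension: the document image is OCR'd, the recognized text is serialized into a context string, and a SQuAD-style BERT model extracts an answer span. The sketch below illustrates that pipeline with the Hugging Face transformers library; the public SQuAD checkpoint and the sample OCR string are stand-ins for illustration, not the paper's exact setup (the paper further fine-tunes BERT-large on DocVQA itself).

```python
# Sketch of the reading-comprehension baseline idea: run OCR over the
# document image, then apply extractive QA to the recognized text.
# The checkpoint is a public SQuAD-finetuned BERT-large used for
# illustration; the paper's *_FINETUNED baseline is additionally
# fine-tuned on DocVQA training data.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

# Stand-in for text produced by an OCR engine on a document image.
ocr_context = "Invoice No. 4521. Date: December 1988. Total amount due: $1,240.00."

answer = qa(question="What is the date of the invoice?", context=ocr_context)
print(answer["answer"], answer["score"])
```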

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | DocVQA val | Accuracy | 54.48 | BERT LARGE Baseline
Visual Question Answering (VQA) | DocVQA val | ANLS | 0.655 | BERT LARGE Baseline
Visual Question Answering (VQA) | DocVQA test | ANLS | 0.9436 | Human
Visual Question Answering (VQA) | DocVQA test | ANLS | 0.665 | BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline
Visual Question Answering (VQA) | DocVQA test | Accuracy | 55.77 | BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline
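The leaderboard's primary metric is ANLS (Average Normalized Levenshtein Similarity), which credits answers by string similarity rather than exact match. Below is a minimal Python sketch of the metric under its standard definition (threshold tau = 0.5, best match over multiple ground-truth answers); it is illustrative, not the official evaluation code.

```python
# Minimal sketch of ANLS (Average Normalized Levenshtein Similarity),
# the DocVQA leaderboard metric. Per question, take the closest match
# over all ground-truth answers; scores whose normalized edit distance
# reaches tau = 0.5 are zeroed out. Names here are illustrative.

def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic single-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def normalized_distance(pred: str, gold: str) -> float:
    """Levenshtein distance normalized to [0, 1] after lowercasing."""
    pred, gold = pred.strip().lower(), gold.strip().lower()
    if not pred and not gold:
        return 0.0
    return levenshtein(pred, gold) / max(len(pred), len(gold))

def anls(predictions, gold_answers, tau: float = 0.5) -> float:
    """Mean over questions of (1 - NL) for the closest gold answer,
    with the score zeroed when NL >= tau."""
    total = 0.0
    for pred, answers in zip(predictions, gold_answers):
        nl = min(normalized_distance(pred, a) for a in answers)
        total += (1.0 - nl) if nl < tau else 0.0
    return total / len(predictions)

# A near-miss gets partial credit; an unrelated answer gets zero.
print(anls(["dec. 1988"], [["December 1988"]]))  # ~0.62
print(anls(["blue"], [["December 1988"]]))       # 0.0
```

The threshold means OCR-style near-misses earn partial credit while unrelated answers score nothing, which is why the table reports ANLS on a 0-1 scale alongside exact-match accuracy.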

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)