Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Visual Question Answering on VQA v2 test-dev

Metric: Accuracy (higher is better)
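
As a point of reference for reading the leaderboard, VQA v2 does not score answers as simply right or wrong: each question has 10 human-annotated answers, and a prediction is given soft credit based on annotator agreement. The sketch below shows the core rule (min(1, matches/3)); the official evaluation additionally normalizes answer strings and averages the score over all subsets of 9 annotators, which this simplified version omits.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA soft accuracy: a prediction matching at least 3
    of the 10 human answers receives full credit; fewer matches earn
    proportional credit (matches / 3)."""
    matches = sum(1 for answer in human_answers if answer == predicted)
    return min(1.0, matches / 3)

# 6 of 10 annotators said "blue", 2 said "navy", 2 said "teal"
answers = ["blue"] * 6 + ["navy"] * 2 + ["teal"] * 2
vqa_accuracy("blue", answers)  # 1.0  (6 matches, capped at 1)
vqa_accuracy("navy", answers)  # 0.667 (2 matches / 3)
```

A model's leaderboard Accuracy is this per-question score averaged over the test-dev split, expressed as a percentage.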


Results

| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|-------|----------|------------|-------|------|------|
| 1 | BLIP-2 ViT-G OPT 6.7B (fine-tuned) | 82.3 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 2 | CoCa | 82.3 | No | CoCa: Contrastive Captioners are Image-Text Foun... | 2022-05-04 | Code |
| 3 | OFA | 82 | No | OFA: Unifying Architectures, Tasks, and Modaliti... | 2022-02-07 | Code |
| 4 | BLIP-2 ViT-G OPT 2.7B (fine-tuned) | 81.74 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 5 | BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 81.66 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 6 | mPLUG-2 | 81.11 | No | mPLUG-2: A Modularized Multi-modal Foundation Mo... | 2023-02-01 | Code |
| 7 | Florence | 80.16 | No | Florence: A New Foundation Model for Computer Vi... | 2021-11-22 | Code |
| 8 | Aurora (ours, r=64) | 77.69 | No | - | - | - |
| 9 | VK-OOD | 76.8 | No | Differentiable Outlier Detection Enable Robust D... | 2023-02-11 | Code |
| 10 | LXMERT (low-magnitude pruning) | 70.72 | No | LXMERT Model Compression for Visual Question Ans... | 2023-10-23 | Code |
| 11 | LocVLM-L | 56.2 | No | Learning to Localize Objects Improves Spatial Re... | 2024-04-11 | Code |