Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Visual Question Answering on VQA v2 val

Metric: Accuracy (higher is better)
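The accuracy figures below are reported under the standard VQA evaluation protocol, in which a predicted answer is scored against the ten human-provided answers for each question. The sketch below is a rough illustration of that per-question score, not the official evaluation code: the function name `vqa_accuracy` and the exact-string matching are assumptions, and the official VQA evaluation additionally normalizes punctuation, articles, and number words before matching.

```python
from typing import List

def vqa_accuracy(predicted: str, human_answers: List[str]) -> float:
    """Per-question VQA accuracy: min(#agreeing annotators / 3, 1),
    averaged over all leave-one-out subsets of the 10 human answers.
    Simplified sketch; assumes answers are already normalized."""
    scores = []
    for i in range(len(human_answers)):
        # Score the prediction against the remaining 9 annotators.
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(1 for a in others if a == predicted)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 8 of 10 annotators answered "yes"
print(vqa_accuracy("yes", ["yes"] * 8 + ["no"] * 2))  # 1.0
```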

Results

| # | Model                               | Accuracy (%) | Extra Data | Paper                                                | Date       | Code |
|---|-------------------------------------|--------------|------------|------------------------------------------------------|------------|------|
| 1 | BLIP-2 ViT-G OPT 6.7B (fine-tuned)  | 82.19        | No         | BLIP-2: Bootstrapping Language-Image Pre-trainin...   | 2023-01-30 | Code |
| 2 | BLIP-2 ViT-G OPT 2.7B (fine-tuned)  | 81.59        | No         | BLIP-2: Bootstrapping Language-Image Pre-trainin...   | 2023-01-30 | Code |
| 3 | BLIP-2 ViT-G FlanT5 XL (fine-tuned) | 81.55        | No         | BLIP-2: Bootstrapping Language-Image Pre-trainin...   | 2023-01-30 | Code |
| 4 | LocVLM-L                            | 55.9         | No         | Learning to Localize Objects Improves Spatial Re...   | 2024-04-11 | Code |