Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Visual Question Answering (VQA) on InfoSeek

Metric: Accuracy (higher is better)
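As a rough illustration of how this metric is computed, the sketch below scores predictions by exact string match against reference answers. This is a minimal assumption, not the benchmark's official scorer, which may normalize answers or accept multiple valid references.

```python
def accuracy(predictions, references):
    """Fraction of questions answered correctly (exact match).

    Assumes one reference answer per question; the official InfoSeek
    evaluation may apply normalization or more lenient matching.
    """
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Two of three answers match, so accuracy is 2/3 ≈ 0.667.
print(accuracy(["paris", "cat", "1889"], ["paris", "dog", "1889"]))
```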


Results

| # | Model | Accuracy | Extra Data | Paper | Date | Code |
|---|-------|----------|------------|-------|------|------|
| 1 | RA-VQAv2 w/ PreFLMR | 30.65 | No | PreFLMR: Scaling Up Fine-Grained Late-Interactio... | 2024-02-13 | Yes |
| 2 | PaLI-X | 24.0 | No | PaLI-X: On Scaling up a Multilingual Vision and ... | 2023-05-29 | Yes |
| 3 | CLIP + FiD | 20.9 | No | Can Pre-trained Vision and Language Models Answe... | 2023-02-23 | Yes |
| 4 | CLIP + PaLM (540B) | 20.4 | No | Can Pre-trained Vision and Language Models Answe... | 2023-02-23 | Yes |
| 5 | PaLI | 19.7 | No | Can Pre-trained Vision and Language Models Answe... | 2023-02-23 | Yes |
| 6 | BLIP-2 | 14.6 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Yes |
| 7 | InstructBLIP | 14.5 | No | — | — | — |