Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Visual Question Answering (VQA) on VLM2-Bench

Metric: Average Score on VLM2-bench (9 subtasks) (higher is better)


Results

| # | Model | Average Score on VLM2-bench (9 subtasks) | Extra Data | Paper | Date | Code |
|---|-------|------------------------------------------|------------|-------|------|------|
| 1 | GPT-4o | 60.36 | No | GPT-4o System Card | 2024-10-25 | - |
| 2 | Qwen2.5-VL-7B | 54.82 | No | Qwen2.5-VL Technical Report | 2025-02-19 | Code |
| 3 | InternVL2.5-26B | 45.59 | No | Expanding Performance Boundaries of Open-Source ... | 2024-12-06 | Code |
| 4 | LLaVA-Video-7B | 43.32 | No | Video Instruction Tuning With Synthetic Data | 2024-10-03 | - |
| 5 | Qwen2-VL-7B | 42.37 | No | Qwen2-VL: Enhancing Vision-Language Model's Perc... | 2024-09-18 | Code |
| 6 | InternVL2.5-8B | 41.23 | No | Expanding Performance Boundaries of Open-Source ... | 2024-12-06 | Code |
| 7 | LLaVA-OneVision-7B | 39.35 | No | LLaVA-OneVision: Easy Visual Task Transfer | 2024-08-06 | Code |
| 8 | mPLUG-Owl3-7B | 37.85 | No | mPLUG-Owl3: Towards Long Image-Sequence Understa... | 2024-08-09 | Code |
| 9 | LongVA-7B | 22.59 | No | Long Context Transfer from Language to Vision | 2024-06-24 | Code |
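As a minimal sketch of how the leaderboard metric is formed: the single "Average Score on VLM2-bench (9 subtasks)" value is the mean of a model's scores on the benchmark's nine subtasks. The subtask names and scores below are illustrative placeholders, not the benchmark's real per-subtask numbers.

```python
# Hypothetical per-subtask scores for one model on VLM2-bench.
# Names and values are assumptions for illustration only.
subtask_scores = {
    "subtask_1": 64.0,
    "subtask_2": 58.5,
    "subtask_3": 61.0,
    "subtask_4": 59.5,
    "subtask_5": 62.0,
    "subtask_6": 57.0,
    "subtask_7": 60.5,
    "subtask_8": 63.0,
    "subtask_9": 58.0,
}

# The leaderboard value is the unweighted mean over the 9 subtasks.
average_score = sum(subtask_scores.values()) / len(subtask_scores)
print(round(average_score, 2))
```

Higher is better, so models are ranked in descending order of this average.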