Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling

Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yimin Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Kaipeng Zhang, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, Wenhai Wang

2024-12-06 · Visual Grounding · Document Understanding · Multimodal Large Language Model · Hallucination · Video Question Answering · Large Language Model · Video Understanding · Visual Question Answering (VQA) · Language Modelling · Visual Question Answering
Paper · PDF · Code (official)

Abstract

We introduce InternVL 2.5, an advanced multimodal large language model (MLLM) series that builds upon InternVL 2.0, maintaining its core model architecture while introducing significant enhancements in training and testing strategies as well as data quality. In this work, we delve into the relationship between model scaling and performance, systematically exploring the performance trends in vision encoders, language models, dataset sizes, and test-time configurations. Through extensive evaluations on a wide range of benchmarks, including multi-discipline reasoning, document understanding, multi-image/video understanding, real-world comprehension, multimodal hallucination detection, visual grounding, multilingual capabilities, and pure language processing, InternVL 2.5 exhibits competitive performance, rivaling leading commercial models such as GPT-4o and Claude-3.5-Sonnet. Notably, our model is the first open-source MLLM to surpass 70% on the MMMU benchmark, achieving a 3.7-point improvement through Chain-of-Thought (CoT) reasoning and showcasing strong potential for test-time scaling. We hope this model contributes to the open-source community by setting new standards for developing and applying multimodal AI systems. A HuggingFace demo is available at https://huggingface.co/spaces/OpenGVLab/InternVL
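
The InternVL 2.5 checkpoints appear to be distributed through HuggingFace alongside the demo linked above. As a minimal sketch of how one of them might be loaded for a text-only query, the snippet below assumes the repo id OpenGVLab/InternVL2_5-8B and the custom model.chat(...) interface described on the model cards; both are assumptions that may drift as the repos evolve, so treat this as an illustration rather than the official recipe.

```python
# Minimal sketch: loading an InternVL2.5 checkpoint for a text-only query.
# The repo id and the custom `chat` interface are assumptions taken from
# the public model cards; trust_remote_code=True pulls in the model's own
# modeling code. Assumes a CUDA-capable GPU with bfloat16 support.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL2_5-8B"  # assumed repo id; other sizes exist
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# pixel_values=None runs a pure-language query; for multimodal inputs,
# pass preprocessed image tensors here (see the model card's transform).
question = "Explain chain-of-thought prompting in one sentence."
response = model.chat(
    tokenizer,
    None,                      # pixel_values (no image for this query)
    question,
    dict(max_new_tokens=256),  # generation_config
)
print(response)
```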

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | VLM2-Bench | Average Score on VLM2-bench (9 subtasks) | 45.59 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | GC-mat | 30.5 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | GC-trk | 30.59 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | OC-cnt | 51.48 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | OC-cpr | 43.33 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | OC-grp | 52.5 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | PC-VID | 21.75 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | PC-cnt | 59.7 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | PC-cpr | 59.5 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | PC-grp | 61 | InternVL2.5-26B
Visual Question Answering (VQA) | VLM2-Bench | Average Score on VLM2-bench (9 subtasks) | 41.23 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | GC-mat | 21.24 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | GC-trk | 26.03 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | OC-cnt | 55.23 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | OC-cpr | 53.33 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | OC-grp | 46.5 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | PC-VID | 5.25 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | PC-cnt | 60 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | PC-cpr | 51.5 | InternVL2.5-8B
Visual Question Answering (VQA) | VLM2-Bench | PC-grp | 52 | InternVL2.5-8B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 72.3 | InternVL2.5-78B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 68.8 | InternVL2.5-38B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 65 | InternVL2.5-26B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 62.8 | InternVL2.5-8B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 60.8 | InternVL2.5-2B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 60.6 | InternVL2.5-4B
Visual Question Answering (VQA) | MM-Vet | GPT-4 score | 48.8 | InternVL2.5-1B
Video Question Answering | OVBench | AVG | 48.7 | InternVL2-7B
Video Question Answering | OVBench | AVG | 44.1 | InternVL2-4B
Video Question Answering | NExT-QA | Accuracy | 85.5 | InternVL2.5-8B
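
As a quick sanity check, the reported VLM2-Bench averages match the mean of the nine subtask scores in the table; the short Python snippet below (scores copied from the rows above) reproduces both values.

```python
# Sanity check: each reported VLM2-Bench "Average Score (9 subtasks)"
# equals the mean of the nine subtask scores listed in the table above.
scores = {
    "InternVL2.5-26B": [30.5, 30.59, 51.48, 43.33, 52.5, 21.75, 59.7, 59.5, 61.0],
    "InternVL2.5-8B": [21.24, 26.03, 55.23, 53.33, 46.5, 5.25, 60.0, 51.5, 52.0],
}
for model, subtasks in scores.items():
    print(model, round(sum(subtasks) / len(subtasks), 2))
# InternVL2.5-26B 45.59
# InternVL2.5-8B 41.23
```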

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
- GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
- VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)