
MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale

Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, YuBo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, Xiang Yue

2024-12-06 · Multimodal Reasoning · Visual Question Answering (VQA) · Visual Question Answering
Paper · PDF · Code

Abstract

Open-source multimodal large language models (MLLMs) have shown significant potential in a broad range of multimodal tasks. However, their reasoning capabilities remain constrained by existing instruction-tuning datasets, which were predominantly repurposed from academic datasets such as VQA, AI2D, and ChartQA. These datasets target simplistic tasks and provide only phrase-level answers without any intermediate rationales. To address these challenges, we introduce a scalable and cost-effective method to construct a large-scale multimodal instruction-tuning dataset with rich intermediate rationales designed to elicit CoT reasoning. Using only open models, we create a dataset containing 12M instruction-response pairs to cover diverse, reasoning-intensive tasks with detailed and faithful rationales. Experiments demonstrate that training MLLMs on this dataset significantly improves reasoning capabilities, achieving state-of-the-art performance on benchmarks such as MathVerse (+8.1%), MMMU-Pro (+7%), and MuirBench (+13.3%). Additionally, the model demonstrates notable improvements of up to 4% on non-reasoning-based benchmarks. Ablation studies further highlight the importance of key components, such as rewriting and self-filtering, in the dataset construction process.
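
The pipeline described in the abstract (take phrase-level academic QA data, rewrite it with an open model to add step-by-step rationales, then self-filter the rewritten examples for faithfulness) can be summarized with the sketch below. This is a minimal illustration under assumed names (`Example`, `rewrite_with_rationale`, `self_filter`, `build_dataset`); it is not the authors' released code, and the model calls are passed in as generic callables.

```python
# Minimal sketch of a rewrite-and-self-filter dataset construction loop,
# loosely following the abstract. All names here are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Example:
    image_id: str
    question: str
    short_answer: str   # phrase-level label from the source academic dataset
    rationale: str = "" # CoT rationale added during rewriting


def rewrite_with_rationale(ex: Example, generate: Callable[[str], str]) -> Example:
    """Ask an open MLLM (wrapped as `generate`) to expand a phrase-level answer
    into a detailed, image-grounded rationale."""
    prompt = (
        f"Question: {ex.question}\n"
        f"Answer: {ex.short_answer}\n"
        "Rewrite the answer with step-by-step reasoning grounded in the image."
    )
    return Example(ex.image_id, ex.question, ex.short_answer, generate(prompt))


def self_filter(ex: Example, judge: Callable[[str], bool]) -> bool:
    """Keep only examples whose rationale a judge model deems faithful
    to the original short answer."""
    check = (
        f"Question: {ex.question}\n"
        f"Rationale: {ex.rationale}\n"
        f"Does the rationale support the answer '{ex.short_answer}'? Reply yes or no."
    )
    return judge(check)


def build_dataset(source: Iterable[Example],
                  generate: Callable[[str], str],
                  judge: Callable[[str], bool]) -> List[Example]:
    """Rewrite every source example, then self-filter the results."""
    rewritten = (rewrite_with_rationale(ex, generate) for ex in source)
    return [ex for ex in rewritten if self_filter(ex, judge)]
```

Both steps rely only on open models supplied by the caller, which matches the abstract's claim that the 12M-pair dataset was built without closed-source models.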

Results

Task                            | Dataset | Metric      | Value | Model
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 62.3  | MAmmoTH-VL-8B
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 60.6  | MAmmoTH-VL-8B (SI)

Related Papers

EgoPrune: Efficient Token Pruning for Egomotion Video Reasoning in Embodied Agent (2025-07-21)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs (2025-07-10)
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning (2025-07-09)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)