Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models

Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, Chunyuan Li

Published: 2024-07-10 · Tasks: Zero-Shot Video Question Answer, Video Question Answering
Paper · PDF · Code (official)

Abstract

Visual instruction tuning has made considerable strides in enhancing the capabilities of Large Multimodal Models (LMMs). However, existing open LMMs largely focus on single-image tasks, and their application to multi-image scenarios remains less explored. Additionally, prior LMM research tackles different scenarios separately, making it impossible to generalize across scenarios with new emerging capabilities. To this end, we introduce LLaVA-NeXT-Interleave, which simultaneously tackles Multi-image, Multi-frame (video), Multi-view (3D), and Multi-patch (single-image) scenarios in LMMs. To enable these capabilities, we regard the interleaved data format as a general template and compile the M4-Instruct dataset with 1,177.6k samples, spanning 4 primary domains with 14 tasks and 41 datasets. We also curate the LLaVA-Interleave Bench to comprehensively evaluate the multi-image performance of LMMs. Through extensive experiments, LLaVA-NeXT-Interleave achieves leading results on multi-image, video, and 3D benchmarks while maintaining performance on single-image tasks. Moreover, the model exhibits several emerging capabilities, e.g., transferring tasks across different settings and modalities. Code is available at https://github.com/LLaVA-VL/LLaVA-NeXT
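To make the "interleaved data format as a general template" idea concrete, here is a minimal sketch of what a multi-image training sample might look like, assuming the LLaVA-style conversation schema with <image> placeholder tokens. The field names and values are illustrative, not the exact M4-Instruct schema.

```python
# A minimal sketch of an interleaved training sample (assumed
# LLaVA-style conversation format; fields are illustrative).
sample = {
    "id": "m4-instruct-000001",
    # Multiple images per sample -- the key difference from
    # single-image visual instruction tuning.
    "images": ["frame_01.jpg", "frame_02.jpg", "frame_03.jpg"],
    "conversations": [
        {
            "from": "human",
            # One <image> placeholder per image, interleaved with text.
            # The same template covers multi-image, video frames (multi-frame),
            # 3D views (multi-view), and crops of one image (multi-patch).
            "value": "<image>\n<image>\n<image>\nWhat changes across these frames?",
        },
        {
            "from": "gpt",
            "value": "The person picks up the cup and walks toward the door.",
        },
    ],
}
```

Because all four scenarios reduce to "a sequence of images interleaved with text," a single model trained on this template can, in principle, transfer skills across them, which is the emerging capability the abstract refers to.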

Results

Task                     | Dataset | Metric   | Value | Model
Question Answering       | VNBench | Accuracy | 20.1  | LLaVA-NeXT-Video-7B
Video Question Answering | NExT-QA | Accuracy | 79.1  | LLaVA-NeXT-Interleave (14B)
Video Question Answering | NExT-QA | Accuracy | 78.2  | LLaVA-NeXT-Interleave (7B)
Video Question Answering | NExT-QA | Accuracy | 77.9  | LLaVA-NeXT-Interleave (DPO)
Video Question Answering | VNBench | Accuracy | 20.1  | LLaVA-NeXT-Video-7B
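For readers who want to try the model behind these numbers, community checkpoints are distributed through Hugging Face transformers. Below is a minimal multi-image inference sketch; the model id (llava-hf/llava-interleave-qwen-7b-hf) and the Qwen-style prompt template are assumptions based on the community release, not details stated on this page.

```python
# Minimal multi-image inference sketch with Hugging Face transformers.
# Assumptions: the llava-hf/llava-interleave-qwen-7b-hf checkpoint and
# its Qwen-style chat template; neither is specified on this page.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-interleave-qwen-7b-hf"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Two frames from a video, passed as an interleaved image list.
images = [Image.open("frame_01.jpg"), Image.open("frame_02.jpg")]
# One <image> token per image, interleaved with the question text.
prompt = (
    "<|im_start|>user <image><image>\n"
    "What changed between these two frames?<|im_end|>"
    "<|im_start|>assistant"
)

inputs = processor(images=images, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```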

Related Papers

Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder (2025-06-28)
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs (2025-06-27)
How Far Can Off-the-Shelf Multimodal Large Language Models Go in Online Episodic Memory Question Answering? (2025-06-19)
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models (2025-06-18)
CogStream: Context-guided Streaming Video Question Answering (2025-06-12)
V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning (2025-06-11)
CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models (2025-06-11)
Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning (2025-06-09)