Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token

Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng

2025-01-07 | Zero-Shot Video Question Answer | Visual Question Answering (VQA)

Paper | PDF | Code (official)

Abstract

The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them, together with textual instructions, into the context of large language models (LLMs), where large-scale parameters and numerous context tokens (predominantly vision tokens) result in substantial computational overhead. Previous efforts toward efficient LMMs have focused on replacing the LLM backbone with smaller models, while neglecting the crucial issue of token quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal vision tokens. To achieve a high compression ratio of vision tokens while preserving visual information, we first analyze how LMMs understand vision tokens and find that most vision tokens play a crucial role only in the early layers of the LLM backbone, where they mainly fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in advance, thereby facilitating the extreme compression of the vision tokens fed to the LLM backbone into a single token. LLaVA-Mini is a unified large multimodal model that supports the understanding of images, high-resolution images, and videos in an efficient manner. Experiments across 11 image-based and 7 video-based benchmarks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by 77%, deliver low-latency responses within 40 milliseconds, and process over 10,000 frames of video on GPU hardware with 24 GB of memory.
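The core efficiency idea in the abstract is that many vision tokens can be pooled into one token before they enter the LLM context. Below is a minimal NumPy sketch of that idea: a single query vector cross-attends over all 576 vision tokens and produces one compressed token, which is then concatenated with the text tokens. This is an illustrative assumption, not the authors' implementation; the query vector, hidden size, single attention head, and random inputs are all placeholders, and LLaVA-Mini's actual compression and modality pre-fusion modules are learned and more elaborate.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_vision_tokens(vision_tokens, query):
    """Pool N vision tokens into one token via single-head cross-attention.

    vision_tokens: (N, d) continuous vision representations
    query:         (1, d) learnable query (here: random placeholder)
    returns:       (1, d) compressed vision token
    """
    d = vision_tokens.shape[-1]
    scores = query @ vision_tokens.T / np.sqrt(d)   # (1, N) attention logits
    weights = softmax(scores, axis=-1)              # convex weights over tokens
    return weights @ vision_tokens                  # (1, d) weighted pooling

rng = np.random.default_rng(0)
d = 64                                   # toy hidden size (assumption)
vision = rng.normal(size=(576, d))       # 576 tokens, as in LLaVA-v1.5
query = rng.normal(size=(1, d))          # stands in for a learned query
compressed = compress_vision_tokens(vision, query)

text = rng.normal(size=(32, d))          # toy text-token embeddings
# LLM context now holds 1 vision token + 32 text tokens instead of 576 + 32
context = np.concatenate([compressed, text], axis=0)
print(context.shape)
```

Since self-attention cost grows with context length, shrinking the vision portion of the context from 576 tokens to 1 is where the bulk of the reported FLOPs savings would come from.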

Results

Task                     | Dataset        | Metric           | Value | Model
Question Answering       | MSVD-QA        | Accuracy         | 70.9  | LLaVA-Mini
Question Answering       | MSVD-QA        | Confidence Score | 4     | LLaVA-Mini
Question Answering       | MSRVTT-QA      | Accuracy         | 59.5  | LLaVA-Mini
Question Answering       | MSRVTT-QA      | Confidence Score | 3.6   | LLaVA-Mini
Question Answering       | ActivityNet-QA | Accuracy         | 53.5  | LLaVA-Mini
Question Answering       | ActivityNet-QA | Confidence Score | 3.5   | LLaVA-Mini
Video Question Answering | MSVD-QA        | Accuracy         | 70.9  | LLaVA-Mini
Video Question Answering | MSVD-QA        | Confidence Score | 4     | LLaVA-Mini
Video Question Answering | MSRVTT-QA      | Accuracy         | 59.5  | LLaVA-Mini
Video Question Answering | MSRVTT-QA      | Confidence Score | 3.6   | LLaVA-Mini
Video Question Answering | ActivityNet-QA | Accuracy         | 53.5  | LLaVA-Mini
Video Question Answering | ActivityNet-QA | Confidence Score | 3.5   | LLaVA-Mini

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder (2025-06-28)
Bridging Video Quality Scoring and Justification via Large Multimodal Models (2025-06-26)
DrishtiKon: Multi-Granular Visual Grounding for Text-Rich Document Images (2025-06-26)