
VisionZip: Longer is Better but Not Necessary in Vision Language Models

Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia

2024-12-05 · CVPR 2025 · Video Understanding · Visual Question Answering
Paper · PDF · Code (official)

Abstract

Recent advancements in vision-language models have enhanced performance by increasing the length of visual tokens, making them much longer than text tokens and significantly raising computational costs. However, we observe that the visual tokens generated by popular vision encoders, such as CLIP and SigLIP, contain significant redundancy. To address this, we introduce VisionZip, a simple yet effective method that selects a set of informative tokens for input to the language model, reducing visual token redundancy and improving efficiency while maintaining model performance. VisionZip can be widely applied to image and video understanding tasks and is well suited for multi-turn dialogues in real-world scenarios, where previous methods tend to underperform. Experimental results show that VisionZip outperforms the previous state-of-the-art method by at least 5% across nearly all settings. Moreover, our method significantly enhances inference speed, reducing prefilling time by 8x and enabling the LLaVA-Next 13B model to infer faster than the LLaVA-Next 7B model while achieving better results. Furthermore, we analyze the causes of this redundancy and encourage the community to focus on extracting better visual features rather than merely increasing token length. Our code is available at https://github.com/dvlab-research/VisionZip.
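The abstract describes VisionZip as selecting a small set of informative visual tokens before they reach the language model. Below is a minimal PyTorch sketch of one way such attention-guided selection can work; the function name, the CLS-attention top-k rule, and the mean-pooled merge of discarded tokens are illustrative assumptions, not the paper's exact algorithm (see the official repository linked above for the real implementation).

```python
# Minimal sketch of attention-guided visual token selection, in the spirit of
# VisionZip's "retain a small set of informative tokens" idea. The selection
# rule (CLS-attention top-k) and the merging step are illustrative
# assumptions, not the authors' implementation.
import torch


def select_informative_tokens(
    patch_tokens: torch.Tensor,   # (B, N, D) patch tokens from CLIP/SigLIP
    cls_attention: torch.Tensor,  # (B, N) CLS-token attention over patches
    num_keep: int = 64,           # token budget, e.g. 64/128/192 as evaluated
) -> torch.Tensor:
    """Keep the num_keep most-attended patches and merge the rest into a
    single mean-pooled context token, returning (B, num_keep + 1, D)."""
    B, N, D = patch_tokens.shape
    top = cls_attention.topk(num_keep, dim=-1).indices               # (B, K)
    kept = patch_tokens.gather(1, top.unsqueeze(-1).expand(-1, -1, D))

    # Mark discarded positions and pool them so their content is not
    # dropped outright (a crude stand-in for a proper merging step).
    discard = torch.ones(B, N, dtype=torch.bool, device=patch_tokens.device)
    discard.scatter_(1, top, False)
    merged = patch_tokens[discard].view(B, N - num_keep, D).mean(1, keepdim=True)

    return torch.cat([kept, merged], dim=1)                          # (B, K+1, D)


# Example: 576 CLIP patch tokens (a 24x24 grid) reduced to 128 + 1 tokens.
tokens = torch.randn(2, 576, 1024)
attn = torch.rand(2, 576)
print(select_informative_tokens(tokens, attn, num_keep=128).shape)  # (2, 129, 1024)
```

The token budgets in the sketch (64, 128, 192) mirror the "Retain N Tokens" settings evaluated in the Results table below.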

Results

Task                            | Dataset | Metric      | Value | Model
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 32.9  | VisionZip (Retain 128 Tokens, fine-tuning)
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 32.6  | VisionZip (Retain 192 Tokens, fine-tuning)
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 32.6  | VisionZip (Retain 128 Tokens)
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 31.7  | VisionZip (Retain 192 Tokens)
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 31.7  | VisionZip (Retain 64 Tokens)
Visual Question Answering (VQA) | MM-Vet  | GPT-4 score | 30.2  | VisionZip (Retain 64 Tokens, fine-tuning)

Related Papers

- VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
- Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
- UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
- EmbRACE-3K: Embodied Reasoning and Action in Complex Environments (2025-07-14)
- Chat with AI: The Surprising Turn of Real-time Video Communication from Human to AI (2025-07-14)
- Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
- LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
- Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights (2025-07-09)