Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Long Context Transfer from Language to Vision

Peiyuan Zhang, Kaichen Zhang, Bo Li, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, Ziwei Liu

2024-06-24 · Zero-Shot Video Question Answer · Video Question Answering · Visual Question Answering (VQA) · Language Modelling

Paper · PDF · Code · Code (official)

Abstract

Video sequences offer valuable temporal information, but existing large multimodal models (LMMs) fall short in understanding extremely long videos. Many works address this by reducing the number of visual tokens using visual resamplers. Alternatively, in this paper, we approach this problem from the perspective of the language model. By simply extrapolating the context length of the language backbone, we enable LMMs to comprehend orders of magnitude more visual tokens without any video training. We call this phenomenon long context transfer and carefully ablate its properties. To effectively measure LMMs' ability to generalize to long contexts in the vision modality, we develop V-NIAH (Visual Needle-In-A-Haystack), a purely synthetic long vision benchmark inspired by the language model's NIAH test. Our proposed Long Video Assistant (LongVA) can process 2000 frames or over 200K visual tokens without additional complexities. With its extended context length, LongVA achieves state-of-the-art performance on Video-MME among 7B-scale models by densely sampling more input frames. Our work is open-sourced at https://github.com/EvolvingLMMs-Lab/LongVA.
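The practical constraint the abstract describes is a token budget: densely sampled frames each contribute visual tokens, and the extended language-backbone context must hold all of them plus the text prompt, while V-NIAH probes retrieval by hiding a single "needle" frame inside a long haystack of unrelated frames. Below is a minimal Python sketch of that bookkeeping, not the paper's implementation; the values of TOKENS_PER_FRAME and EXTENDED_CONTEXT and all helper names are illustrative assumptions (the paper only states "2000 frames or over 200K visual tokens").

```python
import random

# Assumed values for illustration only; not taken from the paper.
TOKENS_PER_FRAME = 144       # hypothetical visual tokens per frame after pooling
EXTENDED_CONTEXT = 224_000   # hypothetical extended context window of the language backbone


def uniform_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Uniformly sample frame indices across a video (dense frame sampling)."""
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]


def fits_in_context(num_frames: int, text_tokens: int = 1_000) -> bool:
    """Check whether visual tokens plus the text prompt fit the extended window."""
    return num_frames * TOKENS_PER_FRAME + text_tokens <= EXTENDED_CONTEXT


def insert_needle(haystack: list, needle, position: int | None = None) -> tuple[list, int]:
    """V-NIAH-style probe: place one 'needle' frame at a chosen or random depth
    inside a long sequence of unrelated 'haystack' frames."""
    if position is None:
        position = random.randint(0, len(haystack))
    return haystack[:position] + [needle] + haystack[position:], position


if __name__ == "__main__":
    indices = uniform_frame_indices(total_frames=120_000, num_samples=2_000)
    ok = fits_in_context(len(indices))
    print(f"{len(indices)} frames sampled; {'fits' if ok else 'does not fit'} in the assumed context window")
```

Under these assumed numbers, 2,000 frames correspond to roughly 288K visual tokens, which is why extrapolating the language model's context length, rather than shrinking the per-frame token count with resamplers, is the lever the paper pulls.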

Results

Task | Dataset | Metric | Value | Model
Question Answering | NExT-QA | Accuracy | 67.1 | LongVA (32 frames)
Visual Question Answering (VQA) | VLM2-Bench | Average Score on VLM2-Bench (9 subtasks) | 22.59 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | GC-mat | 14.29 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | GC-trk | 19.18 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | OC-cnt | 42.53 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | OC-cpr | 26.67 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | OC-grp | 18.5 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | PC-VID | 3.75 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | PC-cnt | 38.9 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | PC-cpr | 21.5 | LongVA-7B
Visual Question Answering (VQA) | VLM2-Bench | PC-grp | 18 | LongVA-7B
Video Question Answering | OVBench | AVG | 43.6 | LongVA (7B)
Video Question Answering | NExT-QA | Accuracy | 67.1 | LongVA (32 frames)

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)