Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BIMBA: Selective-Scan Compression for Long-Range Video Question Answering

Md Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Gedas Bertasius, Lorenzo Torresani

2025-03-12 · CVPR 2025 · Tasks: Zero-Shot Video Question Answering, Video Question Answering
Paper · PDF · Code (official)

Abstract

Video Question Answering (VQA) in long videos poses the key challenge of extracting relevant information and modeling long-range dependencies from many redundant frames. The self-attention mechanism provides a general solution for sequence modeling, but it has a prohibitive cost when applied to a massive number of spatiotemporal tokens in long videos. Most prior methods rely on compression strategies to lower the computational cost, such as reducing the input length via sparse frame sampling or compressing the output sequence passed to the large language model (LLM) via space-time pooling. However, these naive approaches over-represent redundant information and often miss salient events or fast-occurring space-time patterns. In this work, we introduce BIMBA, an efficient state-space model for handling long-form videos. Our model leverages the selective scan algorithm to learn to effectively select critical information from high-dimensional video and transform it into a reduced token sequence for efficient LLM processing. Extensive experiments demonstrate that BIMBA achieves state-of-the-art accuracy on multiple long-form VQA benchmarks, including PerceptionTest, NExT-QA, EgoSchema, VNBench, LongVideoBench, and Video-MME. Code and models are publicly available at https://sites.google.com/view/bimba-mllm.
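To make the compression idea concrete, here is a minimal NumPy sketch of selective-scan-style token compression: an input-dependent gated recurrence runs over the video token sequence (so each token controls how much past state is kept versus overwritten), and the hidden state is read out at a small number of positions to form a reduced sequence for the LLM. All names, dimensions, and the readout strategy here are illustrative assumptions, not BIMBA's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_scan_compress(tokens, num_queries, seed=0):
    """Hypothetical sketch: compress N spatiotemporal tokens down to
    num_queries tokens via an input-dependent (selective) gated scan.
    Illustrative only -- not the paper's actual model."""
    n, d = tokens.shape
    local_rng = np.random.default_rng(seed)

    # Input-dependent gate: each token decides how much of the running
    # state to retain (gate near 1) vs. overwrite with itself (gate near 0).
    w_gate = local_rng.standard_normal((d, 1)) / np.sqrt(d)
    gate = 1.0 / (1.0 + np.exp(-(tokens @ w_gate)))  # (n, 1), values in (0, 1)

    # Linear recurrent scan over the token sequence.
    h = np.zeros(d)
    states = np.empty((n, d))
    for t in range(n):
        h = gate[t] * h + (1.0 - gate[t]) * tokens[t]  # selective update
        states[t] = h

    # Read out the accumulated state at evenly spaced positions as the
    # compressed token sequence passed on to the LLM.
    idx = np.linspace(0, n - 1, num_queries).astype(int)
    return states[idx]

video_tokens = rng.standard_normal((1024, 64))  # e.g. 1024 spatiotemporal tokens
compressed = selective_scan_compress(video_tokens, num_queries=32)
print(compressed.shape)  # (32, 64)
```

Because each summary token carries state accumulated over the scan rather than a single pooled window, salient events between readout positions can still influence the compressed sequence, which is the intuition behind preferring a selective scan over naive space-time pooling.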

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | Video-MME | Accuracy (%) | 64.67 | BIMBA-LLaVA-Qwen2-7B |
| Question Answering | VNBench | Accuracy | 77.88 | BIMBA-LLaVA-Qwen2-7B |
| Question Answering | EgoSchema (fullset) | Accuracy | 71.14 | BIMBA-LLaVA-Qwen2-7B |
| Video Question Answering | NExT-QA | Accuracy | 83.73 | BIMBA-LLaVA-Qwen2-7B |
| Video Question Answering | Perception Test | Accuracy (Top-1) | 68.51 | BIMBA-LLaVA-Qwen2-7B |
| Video Question Answering | Video-MME | Accuracy (%) | 64.67 | BIMBA-LLaVA-Qwen2-7B |
| Video Question Answering | VNBench | Accuracy | 77.88 | BIMBA-LLaVA-Qwen2-7B |
| Video Question Answering | EgoSchema (fullset) | Accuracy | 71.14 | BIMBA-LLaVA-Qwen2-7B |

Related Papers

- Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder (2025-06-28)
- LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs (2025-06-27)
- How Far Can Off-the-Shelf Multimodal Large Language Models Go in Online Episodic Memory Question Answering? (2025-06-19)
- video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models (2025-06-18)
- CogStream: Context-guided Streaming Video Question Answering (2025-06-12)
- V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning (2025-06-11)
- CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models (2025-06-11)
- Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning (2025-06-09)