Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Too Many Frames, Not All Useful: Efficient Strategies for Long-Form Video QA

Jongwoo Park, Kanchana Ranasinghe, Kumara Kahatapitiya, Wonjeong Ryoo, Donghyun Kim, Michael S. Ryoo

Published: 2024-06-13 · Tasks: Zero-Shot Video Question Answer, Question Answering, Video Question Answering
Paper · PDF · Code (official)

Abstract

Long-form videos that span wide temporal intervals are highly redundant in information and contain multiple distinct events or entities that are often only loosely related. Therefore, when performing long-form video question answering (LVQA), all of the information necessary to generate a correct response can often be found within a small subset of frames. Recent literature explores the use of large language models (LLMs) on LVQA benchmarks, achieving exceptional performance while relying on vision-language models (VLMs) to convert all visual content within videos into natural language. Such VLMs often independently caption a large number of frames uniformly sampled from long videos, which is inefficient and mostly redundant. Questioning these design choices, we explore optimal strategies for keyframe selection that can significantly reduce these redundancies, namely the Hierarchical Keyframe Selector. Our proposed framework, LVNet, achieves state-of-the-art performance at a comparable caption scale across three benchmark LVQA datasets: EgoSchema, NExT-QA, and IntentQA. The code can be found at https://github.com/jongwoopark7978/LVNet.
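To make the coarse-to-fine idea concrete, below is a minimal, self-contained sketch of two-stage keyframe selection as the abstract describes it: sample candidate frames uniformly, score them against the question, and caption only the top-scoring few. This is an illustration of the general technique, not the official Hierarchical Keyframe Selector from the LVNet repository; `relevance`, `select_keyframes`, and all parameters here are assumptions for the sketch.

```python
import numpy as np

def relevance(frame: np.ndarray, question: str) -> float:
    """Hypothetical stand-in for a frame-question relevance model.
    A real system would use something like image-text similarity from
    a VLM; mean brightness keeps this sketch self-contained."""
    return float(frame.mean()) / 255.0

def select_keyframes(video: np.ndarray, question: str,
                     num_candidates: int = 64, num_keep: int = 8) -> list[int]:
    """Two-stage (coarse-to-fine) keyframe selection.

    1. Coarse: uniformly sample `num_candidates` frame indices.
    2. Fine: score each candidate against the question and keep only
       the `num_keep` best, so captioning cost drops from
       O(num_candidates) to O(num_keep).
    """
    n = len(video)
    candidates = np.linspace(0, n - 1, num=min(num_candidates, n), dtype=int)
    scores = np.array([relevance(video[i], question) for i in candidates])
    top = np.argsort(scores)[::-1][:num_keep]
    return sorted(int(candidates[i]) for i in top)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Dummy "video": ~1 minute at 30 fps of random 32x32 RGB frames.
    video = rng.integers(0, 256, size=(1800, 32, 32, 3), dtype=np.uint8)
    picks = select_keyframes(video, "What is the person cooking?")
    print("Caption only these frames:", picks)
```

The key design point the sketch captures is that the expensive per-frame captioning step runs on `num_keep` frames rather than on every uniformly sampled frame, which is where the efficiency gain in the paper's setting comes from.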

Results

Task                      Dataset              Metric    Value  Model
Question Answering        NExT-QA              Accuracy  72.9   LVNet (GPT-4o)
Question Answering        IntentQA             Accuracy  71.1   LVNet
Question Answering        EgoSchema (fullset)  Accuracy  61.1   LVNet
Question Answering        EgoSchema (subset)   Accuracy  66.0   LVNet
Video Question Answering  NExT-QA              Accuracy  72.9   LVNet (GPT-4o)
Video Question Answering  IntentQA             Accuracy  71.1   LVNet
Video Question Answering  EgoSchema (fullset)  Accuracy  61.1   LVNet
Video Question Answering  EgoSchema (subset)   Accuracy  66.0   LVNet

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Modeling Code: Is Text All You Need? (2025-07-15)
All Eyes, no IMU: Learning Flight Attitude from Vision Alone (2025-07-15)