Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Glance and Focus: Memory Prompting for Multi-Event Video Question Answering

Ziyi Bai, Ruiping Wang, Xilin Chen

Published: 2024-01-03 · NeurIPS 2023
Tasks: Action Detection · Question Answering · Human-Object Interaction Detection · Video Question Answering
Links: Paper · PDF · Code (official)

Abstract

Video Question Answering (VideoQA) has emerged as a vital tool to evaluate agents' ability to understand human daily behaviors. Despite the recent success of large vision-language models on many multi-modal tasks, complex situation reasoning over videos involving multiple human-object interaction events remains challenging. In contrast, humans can easily tackle it by using a series of episode memories as anchors to quickly locate question-related key moments for reasoning. To mimic this effective reasoning strategy, we propose the Glance-Focus model. One simple approach is to apply an action detection model to predict a set of actions as key memories, but such actions, drawn from a closed-set vocabulary, are hard to generalize to diverse video domains. Instead, we train an encoder-decoder to generate a set of dynamic event memories at the glancing stage. Apart from using supervised bipartite matching to obtain the event memories, we further design an unsupervised memory generation method to remove the dependence on event annotations. Next, at the focusing stage, these event memories act as a bridge between questions involving high-level event concepts and the low-level, lengthy video content. Given a question, the model first focuses on the generated key event memory and then, through our designed multi-level cross-attention mechanism, focuses on the most relevant moment for reasoning. We conduct extensive experiments on four Multi-Event VideoQA benchmarks: STAR, EgoTaskQA, AGQA, and NExT-QA. Our model achieves state-of-the-art results, surpassing current large models on various challenging reasoning tasks. The code and models are available at https://github.com/ByZ0e/Glance-Focus.
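To make the two-stage pipeline described above concrete, here is a minimal PyTorch sketch of the glance stage (learnable queries decoded into event memories, matched to annotated events by bipartite matching in the supervised variant) and the focus stage (multi-level cross-attention from the question to memories, then to frames). All module names, dimensions, and the cosine matching cost are assumptions for illustration only; the authors' actual implementation is in the official repository at https://github.com/ByZ0e/Glance-Focus.

```python
# Hypothetical sketch of the Glance-Focus idea; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


class GlanceFocusSketch(nn.Module):
    def __init__(self, dim=256, num_memories=10, num_heads=8):
        super().__init__()
        # Glancing stage: learnable queries decoded against video features
        # to produce a small set of dynamic event memories (DETR-style).
        self.memory_queries = nn.Parameter(torch.randn(num_memories, dim))
        decoder_layer = nn.TransformerDecoderLayer(dim, num_heads, batch_first=True)
        self.event_decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
        # Focusing stage: multi-level cross-attention, first from the question
        # to the event memories, then from the memory-aware question to frames.
        self.q_to_memory = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.q_to_frames = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video_feats, question_feats):
        # video_feats: (B, T, dim) frame features; question_feats: (B, L, dim)
        B = video_feats.size(0)
        queries = self.memory_queries.unsqueeze(0).repeat(B, 1, 1)
        # Glance: generate event memories from the whole video.
        event_memories = self.event_decoder(queries, video_feats)        # (B, M, dim)
        # Focus, level 1: question attends to the key event memories.
        q_mem, _ = self.q_to_memory(question_feats, event_memories, event_memories)
        # Focus, level 2: memory-aware question attends to the relevant frames.
        q_focus, _ = self.q_to_frames(q_mem, video_feats, video_feats)
        return event_memories, q_focus


def bipartite_match(pred_memories, gt_events):
    """Hungarian matching between predicted memories (M, dim) and annotated
    events (E, dim) for the supervised variant; negative cosine similarity
    is an assumed matching cost."""
    pred = F.normalize(pred_memories, dim=-1)
    gt = F.normalize(gt_events, dim=-1)
    cost = -(pred @ gt.t())                                              # (M, E)
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return rows, cols
```

The unsupervised variant mentioned in the abstract would replace the matching against ground-truth events with a memory-generation objective that needs no event annotations; the sketch above only illustrates the supervised matching path.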

Results

Task                     | Dataset           | Metric           | Value | Model
Question Answering       | EgoTaskQA         | Direct           | 44.27 | GF (sup)
Question Answering       | EgoTaskQA         | Direct           | 43.06 | GF (uns)
Video Question Answering | AGQA 2.0 balanced | Average Accuracy | 55.08 | GF (sup) - Faster RCNN
Video Question Answering | AGQA 2.0 balanced | Average Accuracy | 53.33 | GF (uns) - S3D
Video Question Answering | AGQA 2.0 balanced | Average Accuracy | 48.59 | AIO - ViT
Video Question Answering | STAR Benchmark    | Average Accuracy | 53.94 | GF (sup)
Video Question Answering | STAR Benchmark    | Average Accuracy | 53.86 | GF (uns)
Video Question Answering | NExT-QA           | Accuracy         | 58.83 | GF

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Warehouse Spatial Question Answering with LLM Agent (2025-07-14)
RoHOI: Robustness Benchmark for Human-Object Interaction Detection (2025-07-12)