Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TVQA: Localized, Compositional Video Question Answering

Jie Lei, Licheng Yu, Mohit Bansal, Tamara L. Berg

2018-09-05 · EMNLP 2018 · Video Question Answering

Abstract

Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a large-scale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.
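The abstract's headline numbers imply some useful back-of-the-envelope statistics: roughly seven QA pairs per clip and clips averaging a bit over a minute. A minimal sketch, assuming an illustrative record layout (the field names below are not the official TVQA schema, just one plausible shape for a localized multiple-choice example):

```python
from dataclasses import dataclass

@dataclass
class TVQAExample:
    """One TVQA question. Field names are illustrative, not the official schema."""
    clip_id: str
    question: str
    answers: list[str]      # candidate answer strings
    answer_idx: int         # index of the correct answer
    ts: tuple[float, float] # (start, end) of the localized moment, in seconds
    subtitle: str           # dialogue overlapping the localized moment

# Headline statistics reported in the abstract.
NUM_QA_PAIRS = 152_545
NUM_CLIPS = 21_793
TOTAL_HOURS = 460

qa_per_clip = NUM_QA_PAIRS / NUM_CLIPS           # ~7 questions per clip
avg_clip_secs = TOTAL_HOURS * 3600 / NUM_CLIPS   # ~76 seconds per clip
print(f"{qa_per_clip:.1f} QA pairs/clip, {avg_clip_secs:.0f} s/clip")
```

The timestamp field reflects the task's compositional design: a system must first localize the relevant moment (`ts`), then combine the subtitle dialogue and visual content within it to pick the correct answer.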

Results

Task                     | Dataset        | Metric | Value | Model
Video Question Answering | SUTD-TrafficQA | 1/2    | 63.15 | TVQA
Video Question Answering | SUTD-TrafficQA | 1/4    | 35.16 | TVQA

Related Papers

Decoupled Seg Tokens Make Stronger Reasoning Video Segmenter and Grounder (2025-06-28)
LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs (2025-06-27)
How Far Can Off-the-Shelf Multimodal Large Language Models Go in Online Episodic Memory Question Answering? (2025-06-19)
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models (2025-06-18)
CogStream: Context-guided Streaming Video Question Answering (2025-06-12)
V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning (2025-06-11)
CausalVQA: A Physically Grounded Causal Reasoning Benchmark for Video Models (2025-06-11)
Looking Beyond Visible Cues: Implicit Video Question Answering via Dual-Clue Reasoning (2025-06-09)