Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


VidChapters-7M: Video Chapters at Scale

Antoine Yang, Arsha Nagrani, Ivan Laptev, Josef Sivic, Cordelia Schmid

2023-09-25 · NeurIPS 2023
Tasks: Video Chaptering, Navigate, Video Captioning, Dense Video Captioning
Links: Paper · PDF · Code

Abstract

Segmenting long videos into chapters enables users to quickly navigate to the information of their interest. This important topic has been understudied due to the lack of publicly released datasets. To address this issue, we present VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters in total. VidChapters-7M is automatically created from videos online in a scalable manner by scraping user-annotated chapters and hence without any additional manual annotation. We introduce the following three tasks based on this data. First, the video chapter generation task consists of temporally segmenting the video and generating a chapter title for each segment. To further dissect the problem, we also define two variants of this task: video chapter generation given ground-truth boundaries, which requires generating a chapter title given an annotated video segment, and video chapter grounding, which requires temporally localizing a chapter given its annotated title. We benchmark both simple baselines and state-of-the-art video-language models for these three tasks. We also show that pretraining on VidChapters-7M transfers well to dense video captioning tasks in both zero-shot and finetuning settings, largely improving the state of the art on the YouCook2 and ViTT benchmarks. Finally, our experiments reveal that downstream performance scales well with the size of the pretraining dataset. Our dataset, code, and models are publicly available at https://antoyang.github.io/vidchapters.html.
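The chapters in VidChapters-7M originate from timestamp lines that creators write in their video descriptions. As a rough, hypothetical illustration of what scraping such user annotations involves (not the paper's actual pipeline; the function and format below are assumptions), a minimal parser could look like this:

```python
import re

# Matches description lines such as "0:00 Intro" or "1:23:45 Results"
# that creators use to define chapters.
TIMESTAMP_LINE = re.compile(r"^\s*((?:\d{1,2}:)?\d{1,2}:\d{2})\s+(.+)$")


def parse_chapters(description: str):
    """Extract (start_seconds, title) pairs from a video description."""
    chapters = []
    for line in description.splitlines():
        match = TIMESTAMP_LINE.match(line)
        if not match:
            continue
        # Convert "h:mm:ss" or "m:ss" to seconds.
        seconds = 0
        for part in match.group(1).split(":"):
            seconds = seconds * 60 + int(part)
        chapters.append((seconds, match.group(2).strip()))
    return chapters


if __name__ == "__main__":
    demo = "0:00 Intro\n1:05 Building the dataset\n12:30 Results"
    print(parse_chapters(demo))
    # [(0, 'Intro'), (65, 'Building the dataset'), (750, 'Results')]
```

Each chapter's end time is implicitly the start of the next chapter (or the video's end), which yields the segment boundaries used by the chapter generation and grounding tasks described above.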

Results

Task                   | Dataset        | Metric | Value | Model
Video Captioning       | VidChapters-7M | CIDEr  | 55.7  | Vid2Seq
Dense Video Captioning | VidChapters-7M | CIDEr  | 55.7  | Vid2Seq
Video Chaptering       | VidChapters-7M | CIDEr  | 55.7  | Vid2Seq
Video Chaptering       | VidChapters-7M | P@0.5  | 43.1  | Vid2Seq
Video Chaptering       | VidChapters-7M | P@0.7  | 26.4  | Vid2Seq
Video Chaptering       | VidChapters-7M | P@3s   | 24    | Vid2Seq
Video Chaptering       | VidChapters-7M | P@5s   | 30.3  | Vid2Seq
Video Chaptering       | VidChapters-7M | R@0.5  | 48.2  | Vid2Seq
Video Chaptering       | VidChapters-7M | R@0.7  | 28.5  | Vid2Seq
Video Chaptering       | VidChapters-7M | R@3s   | 28.5  | Vid2Seq
Video Chaptering       | VidChapters-7M | R@5s   | 36.4  | Vid2Seq
Video Chaptering       | VidChapters-7M | SODA   | 0.114 | Vid2Seq
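The P@K and R@K rows are localization-style metrics that this page does not define; a reasonable reading (an assumption here, not stated by the source) is precision and recall of predicted chapter segments matched to ground truth at a temporal-IoU threshold K, with the @3s/@5s variants instead matching boundaries within a 3- or 5-second tolerance. A minimal sketch of the IoU-threshold version:

```python
def temporal_iou(a, b):
    """IoU of two (start, end) segments given in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0


def precision_recall_at_iou(pred, gt, threshold=0.5):
    """Fraction of predicted / ground-truth segments with a match above the tIoU threshold."""
    matched_pred = sum(any(temporal_iou(p, g) >= threshold for g in gt) for p in pred)
    matched_gt = sum(any(temporal_iou(g, p) >= threshold for p in pred) for g in gt)
    precision = matched_pred / len(pred) if pred else 0.0
    recall = matched_gt / len(gt) if gt else 0.0
    return precision, recall


if __name__ == "__main__":
    pred = [(0, 60), (60, 200)]          # predicted chapter segments
    gt = [(0, 65), (65, 180), (180, 240)]  # ground-truth chapters
    print(precision_recall_at_iou(pred, gt, threshold=0.5))  # (1.0, 0.666...)
```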

Related Papers

Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
CogDDN: A Cognitive Demand-Driven Navigation with Decision Optimization and Dual-Process Thinking (2025-07-15)
UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
Privacy-Preserving Multi-Stage Fall Detection Framework with Semi-supervised Federated Learning and Robotic Vision Confirmation (2025-07-14)
Automating MD simulations for Proteins using Large language Models: NAMD-Agent (2025-07-10)
Graph Learning (2025-07-08)
Visual Hand Gesture Recognition with Deep Learning: A Comprehensive Review of Methods, Datasets, Challenges and Future Research Directions (2025-07-06)
STRUCTSENSE: A Task-Agnostic Agentic Framework for Structured Information Extraction with Human-In-The-Loop Evaluation and Benchmarking (2025-07-04)