Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning

Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid

2023-02-27 · CVPR 2023
Tasks: Video Captioning · Dense Video Captioning · Language Modelling
Paper · PDF · Code (official)

Abstract

In this work, we introduce Vid2Seq, a multi-modal single-stage dense event captioning model pretrained on narrated videos which are readily-available at scale. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. Such a unified model requires large-scale training data, which is not available in current annotated datasets. We show that it is possible to leverage unlabeled narrated videos for dense video captioning, by reformulating sentence boundaries of transcribed speech as pseudo event boundaries, and using the transcribed speech sentences as pseudo event captions. The resulting Vid2Seq model pretrained on the YT-Temporal-1B dataset improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the tasks of video paragraph captioning and video clip captioning, and to few-shot settings. Our code is publicly available at https://antoyang.github.io/vid2seq.html.
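The core mechanism the abstract describes, quantizing event boundaries into special time tokens and interleaving them with caption text in a single output sequence, can be sketched as follows. This is a minimal illustration only: the bin count of 100 and the <time=k> token format are assumptions for readability, not the exact vocabulary or quantization used in the paper.

    NUM_TIME_BINS = 100  # assumed number of time tokens; illustrative only

    def time_token(t_seconds: float, duration: float) -> str:
        """Quantize an absolute timestamp into one of NUM_TIME_BINS special tokens,
        relative to the total video duration."""
        k = min(int(t_seconds / duration * NUM_TIME_BINS), NUM_TIME_BINS - 1)
        return f"<time={k}>"

    def build_target_sequence(events, duration):
        """Serialize (start, end, caption) events into one token sequence, so a
        single decoder pass predicts both boundaries and descriptions."""
        parts = [f"{time_token(s, duration)} {time_token(e, duration)} {caption}"
                 for s, e, caption in sorted(events)]
        return " ".join(parts)

    # Pseudo-labels from a narrated video, as in the pretraining recipe:
    # transcribed-speech sentence boundaries stand in for event boundaries,
    # and the sentences themselves stand in for event captions.
    asr_sentences = [(12.0, 18.5, "crack two eggs into the bowl"),
                     (20.0, 31.0, "whisk until the mixture is smooth")]
    print(build_target_sequence(asr_sentences, duration=120.0))
    # -> <time=10> <time=15> crack two eggs into the bowl <time=16> <time=25> whisk ...

Serializing localization into the text stream is what lets one language-model head handle both sub-tasks, and it is also why unlabeled narrated video can supply pretraining supervision at scale.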

Results

Task                   | Dataset              | Metric | Value | Model
-----------------------|----------------------|--------|-------|--------
Video Captioning       | MSR-VTT              | CIDEr  | 64.6  | Vid2Seq
Video Captioning       | MSR-VTT              | METEOR | 30.8  | Vid2Seq
Video Captioning       | MSVD                 | CIDEr  | 146.2 | Vid2Seq
Video Captioning       | MSVD                 | METEOR | 45.3  | Vid2Seq
Video Captioning       | YouCook2             | CIDEr  | 47.1  | Vid2Seq
Video Captioning       | YouCook2             | METEOR | 9.3   | Vid2Seq
Video Captioning       | YouCook2             | SODA   | 7.9   | Vid2Seq
Video Captioning       | ViTT                 | CIDEr  | 43.5  | Vid2Seq
Video Captioning       | ViTT                 | METEOR | 8.5   | Vid2Seq
Video Captioning       | ViTT                 | SODA   | 13.5  | Vid2Seq
Video Captioning       | ActivityNet Captions | CIDEr  | 28    | Vid2Seq
Video Captioning       | ActivityNet Captions | METEOR | 17    | Vid2Seq
Dense Video Captioning | YouCook2             | CIDEr  | 47.1  | Vid2Seq
Dense Video Captioning | YouCook2             | METEOR | 9.3   | Vid2Seq
Dense Video Captioning | YouCook2             | SODA   | 7.9   | Vid2Seq
Dense Video Captioning | ViTT                 | CIDEr  | 43.5  | Vid2Seq
Dense Video Captioning | ViTT                 | METEOR | 8.5   | Vid2Seq
Dense Video Captioning | ViTT                 | SODA   | 13.5  | Vid2Seq
Dense Video Captioning | ActivityNet Captions | CIDEr  | 28    | Vid2Seq
Dense Video Captioning | ActivityNet Captions | METEOR | 17    | Vid2Seq
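For readers reproducing these numbers, the CIDEr column can be computed with the pycocoevalcap package; the sketch below uses placeholder video ids and captions, not data from the paper, and leaderboard values like those above conventionally report CIDEr scaled by 100. SODA, which is specific to dense captioning, has its own separate reference implementation.

    from pycocoevalcap.cider.cider import Cider  # pip install pycocoevalcap

    # References: each video id maps to one or more ground-truth captions.
    # For exact reproduction, captions should first be tokenized and lowercased
    # (e.g., with the package's PTBTokenizer).
    gts = {"vid1": ["a man is slicing an onion"],
           "vid2": ["a dog runs across a field"]}
    # Hypotheses: each video id maps to a single generated caption.
    res = {"vid1": ["a person cuts an onion"],
           "vid2": ["a dog is running in the field"]}

    corpus_score, per_video_scores = Cider().compute_score(gts, res)
    print(f"CIDEr: {corpus_score * 100:.1f}")  # scaled by 100, matching the table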

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)