Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Accurate and Fast Compressed Video Captioning

Yaojie Shen, Xin Gu, Kai Xu, Heng Fan, Longyin Wen, Libo Zhang

2023-09-22 · ICCV 2023 · Video Captioning
Paper · PDF · Code (official)

Abstract

Existing video captioning approaches typically first sample frames from a decoded video and then run subsequent processing (e.g., feature extraction and/or captioning-model learning). In this pipeline, manual frame sampling may miss key information in videos and thus degrade performance, while redundant information in the sampled frames reduces efficiency at inference time. To address this, we study video captioning from a different perspective, in the compressed domain, which brings multi-fold advantages over the existing pipeline: 1) compared to raw images from the decoded video, the compressed video, consisting of I-frames, motion vectors, and residuals, is highly distinguishable, which allows us to leverage the entire video for learning, without manual sampling, through a specialized model design; 2) the captioning model is more efficient at inference because it processes smaller and less redundant inputs. We propose a simple yet effective end-to-end transformer in the compressed domain for video captioning that learns directly from the compressed video. We show that even with this simple design, our method achieves state-of-the-art performance on different benchmarks while running almost 2x faster than existing approaches. Code is available at https://github.com/acherstyx/CoCap.
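To see why the compressed-domain input is smaller, consider a back-of-the-envelope comparison between fully decoding a group of pictures (GOP) and reading the compressed stream directly. The sketch below is illustrative only, not the paper's configuration: the frame size, GOP length, and macroblock granularity are assumptions, and residuals are omitted for simplicity (CoCap also consumes residuals).

```python
# Illustrative sketch (assumed sizes, not CoCap's exact configuration):
# compare the raw element count a captioning model must ingest when reading
# the compressed stream (one RGB I-frame + per-frame motion vectors) versus
# fully decoding every frame. Residuals are ignored here for simplicity.
H, W = 224, 224
MACROBLOCK = 16          # H.264 stores motion vectors per 16x16 macroblock
GOP_LEN = 12             # 1 I-frame + 11 P-frames (assumed GOP length)

def decoded_elements(n_frames, h=H, w=W):
    """Fully decoded pipeline: every frame is an h*w*3 RGB array."""
    return n_frames * h * w * 3

def compressed_elements(n_frames, h=H, w=W, mb=MACROBLOCK):
    """Compressed pipeline: one RGB I-frame plus a 2-channel motion-vector
    grid (one vector per macroblock) for each remaining frame."""
    mv_grid = (h // mb) * (w // mb) * 2     # 14 * 14 * 2 = 392 values/frame
    return h * w * 3 + (n_frames - 1) * mv_grid

dec = decoded_elements(GOP_LEN)      # 12 * 224*224*3 = 1,806,336
comp = compressed_elements(GOP_LEN)  # 150,528 + 11*392 = 154,840
print(f"decoded: {dec}, compressed: {comp}, ratio: {dec / comp:.1f}x")
```

Even this crude count shows an order-of-magnitude reduction in input volume per GOP, which is the intuition behind the efficiency claim; the actual speedup also depends on the model design and how residuals are handled.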

Results

Task | Dataset | Metric | Value | Model
Video Captioning | MSR-VTT | BLEU-4 | 44.4 | CoCap (ViT/L14)
Video Captioning | MSR-VTT | CIDEr | 57.2 | CoCap (ViT/L14)
Video Captioning | MSR-VTT | METEOR | 30.3 | CoCap (ViT/L14)
Video Captioning | MSR-VTT | ROUGE-L | 63.4 | CoCap (ViT/L14)
Video Captioning | VATEX | BLEU-4 | 35.8 | CoCap (ViT/L14)
Video Captioning | VATEX | CIDEr | 64.8 | CoCap (ViT/L14)
Video Captioning | VATEX | METEOR | 25.3 | CoCap (ViT/L14)
Video Captioning | VATEX | ROUGE-L | 52 | CoCap (ViT/L14)
Video Captioning | MSVD | BLEU-4 | 60.1 | CoCap (ViT/L14)
Video Captioning | MSVD | CIDEr | 121.5 | CoCap (ViT/L14)
Video Captioning | MSVD | METEOR | 41.4 | CoCap (ViT/L14)
Video Captioning | MSVD | ROUGE-L | 78.2 | CoCap (ViT/L14)

Related Papers

UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
Show, Tell and Summarize: Dense Video Captioning Using Visual Cue Aided Sentence Summarization (2025-06-25)
Dense Video Captioning using Graph-based Sentence Summarization (2025-06-25)
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models (2025-06-18)
VersaVid-R1: A Versatile Video Understanding and Reasoning Model from Question Answering to Captioning Tasks (2025-06-10)
ARGUS: Hallucination and Omission Evaluation in Video-LLMs (2025-06-09)
Temporal Object Captioning for Street Scene Videos from LiDAR Tracks (2025-05-22)
FLASH: Latent-Aware Semi-Autoregressive Speculative Decoding for Multimodal Tasks (2025-05-19)