Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020
Teng Wang, Huicheng Zheng, Mingjing Yu
Abstract
This technical report briefly describes our submission to the dense video captioning task of ActivityNet Challenge 2020. Our approach follows a two-stage pipeline: we first extract a set of temporal event proposals, then apply a multi-event captioning model that captures event-level temporal relationships and effectively fuses multi-modal information. Our approach achieves a METEOR score of 9.28 on the test set.
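The two-stage pipeline described above can be sketched as follows. This is a hypothetical, minimal illustration of the general propose-then-caption structure, not the authors' actual TSRM-CMG-HRNN model: `propose_events`, `fuse_modalities`, and `caption_events` are illustrative names, sliding-window proposals stand in for the learned proposal module, and a weighted score average stands in for the learned multi-modal fusion and caption decoder.

```python
from typing import List, Tuple

# Hypothetical sketch of a two-stage dense video captioning pipeline.
# All names and logic here are illustrative stand-ins, not the
# submission's actual implementation.

def propose_events(num_frames: int, window: int = 16,
                   stride: int = 8) -> List[Tuple[int, int]]:
    """Stage 1 stand-in: candidate temporal segments via sliding windows."""
    proposals = []
    for s in range(0, num_frames, stride):
        e = min(s + window, num_frames)
        if e > s:
            proposals.append((s, e))
    return proposals

def fuse_modalities(visual: float, audio: float, alpha: float = 0.7) -> float:
    """Toy multi-modal fusion: weighted combination of per-segment scores."""
    return alpha * visual + (1 - alpha) * audio

def caption_events(proposals, visual_feats, audio_feats):
    """Stage 2 stand-in: 'caption' each proposal from fused segment features."""
    captions = []
    for (s, e) in proposals:
        v = sum(visual_feats[s:e]) / (e - s)   # mean visual score in segment
        a = sum(audio_feats[s:e]) / (e - s)    # mean audio score in segment
        score = fuse_modalities(v, a)
        captions.append(((s, e), f"event[{s}:{e}] score={score:.2f}"))
    return captions

if __name__ == "__main__":
    T = 32
    visual = [0.5] * T
    audio = [0.1] * T
    for span, cap in caption_events(propose_events(T), visual, audio):
        print(span, cap)
```

In the real system the proposal stage is learned rather than exhaustive, and the captioning stage models relationships between events rather than treating each proposal independently; the sketch only shows the overall data flow.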
Results
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Captioning | ActivityNet Captions | METEOR | 9.71 | TSRM-CMG-HRNN+SCST |
| Dense Video Captioning | ActivityNet Captions | METEOR | 9.71 | TSRM-CMG-HRNN+SCST |
Related Papers
- UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
- Show, Tell and Summarize: Dense Video Captioning Using Visual Cue Aided Sentence Summarization (2025-06-25)
- Dense Video Captioning using Graph-based Sentence Summarization (2025-06-25)
- video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models (2025-06-18)
- VersaVid-R1: A Versatile Video Understanding and Reasoning Model from Question Answering to Captioning Tasks (2025-06-10)
- ARGUS: Hallucination and Omission Evaluation in Video-LLMs (2025-06-09)
- STSBench: A Spatio-temporal Scenario Benchmark for Multi-modal Large Language Models in Autonomous Driving (2025-06-06)
- Does Your 3D Encoder Really Work? When Pretrain-SFT from 2D VLMs Meets 3D VLMs (2025-06-05)