Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding

Zhuo Cao, Bingqing Zhang, Heming Du, Xin Yu, Xue Li, Sen Wang

Published: 2024-12-18

Tasks: Highlight Detection, Moment Retrieval, Video Understanding, Retrieval, Natural Language Moment Retrieval

Links: Paper · PDF · Code (official)

Abstract

Text-guided Video Temporal Grounding (VTG) aims to localize relevant segments in untrimmed videos based on textual descriptions, encompassing two subtasks: Moment Retrieval (MR) and Highlight Detection (HD). Although previous typical methods have achieved commendable results, it is still challenging to retrieve short video moments. This is primarily due to the reliance on sparse and limited decoder queries, which significantly constrain the accuracy of predictions. Furthermore, suboptimal outcomes often arise because previous methods rank predictions based on isolated predictions, neglecting the broader video context. To tackle these issues, we introduce FlashVTG, a framework featuring a Temporal Feature Layering (TFL) module and an Adaptive Score Refinement (ASR) module. The TFL module replaces the traditional decoder structure to capture nuanced video content variations across multiple temporal scales, while the ASR module improves prediction ranking by integrating context from adjacent moments and multi-temporal-scale features. Extensive experiments demonstrate that FlashVTG achieves state-of-the-art performance on four widely adopted datasets in both MR and HD. Specifically, on the QVHighlights dataset, it boosts mAP by 5.8% for MR and 3.3% for HD. For short-moment retrieval, FlashVTG increases mAP to 125% of previous SOTA performance. All these improvements are made without adding training burdens, underscoring its effectiveness. Our code is available at https://github.com/Zhuo-Cao/FlashVTG.
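The abstract's two components can be pictured concretely: a feature pyramid over clip features at several temporal strides (in place of sparse decoder queries), and a score-refinement step that smooths each candidate moment's confidence with its neighbours before ranking. The sketch below is purely illustrative — the function names, pooling scheme, and smoothing kernel are assumptions for exposition, not the official FlashVTG implementation:

```python
import numpy as np

def temporal_feature_pyramid(clip_feats, num_scales=3):
    """Toy Temporal Feature Layering: average-pool clip features at
    progressively coarser temporal resolutions (hypothetical sketch,
    not the paper's TFL module)."""
    pyramid = [clip_feats]
    feats = clip_feats
    for _ in range(num_scales - 1):
        T = feats.shape[0] - feats.shape[0] % 2   # drop odd trailing clip
        feats = feats[:T].reshape(T // 2, 2, -1).mean(axis=1)  # stride-2 pooling
        pyramid.append(feats)
    return pyramid

def refine_scores(scores, window=1):
    """Toy score refinement: smooth each moment's confidence with its
    temporal neighbours, so ranking uses local context rather than
    isolated per-moment predictions."""
    padded = np.pad(scores, window, mode="edge")
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    return np.convolve(padded, kernel, mode="valid")

# 64 clips, 8-dim features -> pyramid at temporal lengths 64, 32, 16
pyramid = temporal_feature_pyramid(np.random.rand(64, 8))
print([p.shape[0] for p in pyramid])  # [64, 32, 16]
```

The key design point the abstract argues for is visible even in this toy version: candidates are drawn from dense multi-scale features rather than a fixed handful of decoder queries, and the final ranking score of a moment depends on its surrounding context.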

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Video | TACoS | R@1, IoU=0.3 | 53.71 | FlashVTG |
| Video | TACoS | R@1, IoU=0.5 | 41.76 | FlashVTG |
| Video | TACoS | R@1, IoU=0.7 | 24.74 | FlashVTG |
| Video | TACoS | mIoU | 37.61 | FlashVTG |
| Moment Retrieval | Charades-STA | R@1, IoU=0.5 | 70.32 | FlashVTG |
| Moment Retrieval | Charades-STA | R@1, IoU=0.7 | 49.87 | FlashVTG |
| Moment Retrieval | QVHighlights | R@1, IoU=0.5 | 70.69 | FlashVTG |
| Moment Retrieval | QVHighlights | R@1, IoU=0.7 | 53.96 | FlashVTG |
| Moment Retrieval | QVHighlights | mAP | 52 | FlashVTG |
| Moment Retrieval | QVHighlights | mAP@0.5 | 72.33 | FlashVTG |
| Moment Retrieval | QVHighlights | mAP@0.75 | 53.85 | FlashVTG |
| Highlight Detection | TVSum | mAP | 88 | FlashVTG |
| Highlight Detection | YouTube Highlights | mAP | 75.4 | FlashVTG |
| Highlight Detection | QVHighlights | Hit@1 | 71.01 | FlashVTG |
| Highlight Detection | QVHighlights | mAP | 44.09 | FlashVTG |
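For readers unfamiliar with the metrics above: R@1 at IoU=θ is the fraction of queries whose single top-ranked moment overlaps the ground-truth span with temporal IoU of at least θ. A minimal sketch of that computation (illustrative only; the spans and thresholds below are made-up examples, not paper data):

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) temporal spans, e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, gts, thresh):
    """R@1 at an IoU threshold: fraction of queries whose top-ranked
    moment overlaps the ground truth by at least `thresh`."""
    hits = sum(temporal_iou(p, g) >= thresh for p, g in zip(top1_preds, gts))
    return hits / len(gts)

# Hypothetical top-1 predictions and ground truths for two queries
preds = [(10.0, 20.0), (5.0, 9.0)]
gts   = [(12.0, 22.0), (5.0, 9.0)]
print(recall_at_1(preds, gts, 0.5))  # 1.0  (IoUs are 0.667 and 1.0)
print(recall_at_1(preds, gts, 0.7))  # 0.5  (only the exact match clears 0.7)
```

mAP variants (mAP@0.5, mAP@0.75) apply the same IoU matching inside an average-precision computation over ranked candidate lists, which is why the abstract's point about context-aware ranking matters for these numbers.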

Related Papers

VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)