Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LD-DETR: Loop Decoder DEtection TRansformer for Video Moment Retrieval and Highlight Detection

Pengcheng Zhao, Zhixian He, Fuwei Zhang, Shujin Lin, Fan Zhou

2025-01-18 · Highlight Detection · Contrastive Learning · Moment Retrieval · Retrieval · Natural Language Moment Retrieval

Paper · PDF · Code (official)

Abstract

Video Moment Retrieval and Highlight Detection aim to find corresponding content in the video based on a text query. Existing models usually first use contrastive learning methods to align video and text features, then fuse and extract multimodal information, and finally use a Transformer Decoder to decode multimodal information. However, existing methods face several issues: (1) Overlapping semantic information between different samples in the dataset hinders the model's multimodal aligning performance; (2) Existing models are not able to efficiently extract local features of the video; (3) The Transformer Decoder used by the existing model cannot adequately decode multimodal features. To address the above issues, we proposed the LD-DETR model for Video Moment Retrieval and Highlight Detection tasks. Specifically, we first distilled the similarity matrix into the identity matrix to mitigate the impact of overlapping semantic information. Then, we designed a method that enables convolutional layers to extract multimodal local features more efficiently. Finally, we fed the output of the Transformer Decoder back into itself to adequately decode multimodal information. We evaluated LD-DETR on four public benchmarks and conducted extensive experiments to demonstrate the superiority and effectiveness of our approach. Our model outperforms the State-Of-The-Art models on QVHighlight, Charades-STA and TACoS datasets. Our code is available at https://github.com/qingchen239/ld-detr.
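The first contribution above, distilling the video-text similarity matrix toward the identity matrix, can be sketched as a contrastive alignment loss. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function name, temperature value, and the choice of cross-entropy against a diagonal target are all hypothetical.

```python
import numpy as np

def distill_align_loss(video_feats, text_feats, temperature=0.07):
    """Hypothetical sketch: normalize features, build the (B, B)
    video-text similarity matrix, and penalize it for deviating from
    the identity matrix (each clip should match only its own query)."""
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = v @ t.T / temperature                 # pairwise similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    idx = np.arange(len(probs))
    # cross-entropy against the identity target: diagonal entries should be 1
    return -np.mean(np.log(probs[idx, idx] + 1e-12))
```

With perfectly aligned features the diagonal of the softmaxed similarity matrix dominates and the loss is near zero; misaligned batches are penalized heavily, which is the behavior the distillation objective relies on.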

Results

Task              Dataset        Metric          Value   Model
Video             TACoS          R@1, IoU=0.3    57.61   LD-DETR
Video             TACoS          R@1, IoU=0.5    44.31   LD-DETR
Video             TACoS          R@1, IoU=0.7    26.24   LD-DETR
Video             TACoS          mIoU            40.3    LD-DETR
Moment Retrieval  Charades-STA   R@1, IoU=0.3    73.92   LD-DETR
Moment Retrieval  Charades-STA   R@1, IoU=0.5    62.58   LD-DETR
Moment Retrieval  Charades-STA   R@1, IoU=0.7    41.56   LD-DETR
Moment Retrieval  Charades-STA   mIoU            53.44   LD-DETR
Moment Retrieval  QVHighlights   R@1, IoU=0.5    66.8    LD-DETR
Moment Retrieval  QVHighlights   R@1, IoU=0.7    51.04   LD-DETR
Moment Retrieval  QVHighlights   mAP             46.41   LD-DETR
Moment Retrieval  QVHighlights   mAP@0.5         67.61   LD-DETR
Moment Retrieval  QVHighlights   mAP@0.75        46.99   LD-DETR
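The R@1, IoU=θ metrics in the table follow the standard moment-retrieval definition: a query counts as a hit if the model's top-ranked moment overlaps the ground truth with temporal IoU of at least θ. A minimal sketch of that computation (function names are illustrative, not from the paper's code):

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) moments, e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, gts, threshold):
    """Percentage of queries whose top-1 moment reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(top1_preds, gts))
    return 100.0 * hits / len(gts)
```

For example, a prediction (0, 10) against ground truth (0, 8) has IoU 0.8, so it counts toward R@1, IoU=0.5 and R@1, IoU=0.7 but not a stricter 0.9 threshold; mIoU is simply the mean of the per-query IoUs.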

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)