Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


UnLoc: A Unified Framework for Video Localization Tasks

Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang, Weina Ge, David Ross, Cordelia Schmid

2023-08-21 · ICCV 2023

Tasks: Action Segmentation · Zero-Shot Action Detection · Moment Retrieval · Temporal Localization · Retrieval · Temporal Action Localization · Natural Language Moment Retrieval

Paper · PDF · Code (official)

Abstract

While large-scale image-text pretrained models such as CLIP have been used for multiple video-level tasks on trimmed videos, their use for temporal localization in untrimmed videos is still relatively unexplored. We design a new approach for this called UnLoc, which uses pretrained image and text towers, and feeds tokens to a video-text fusion model. The outputs of the fusion module are then used to construct a feature pyramid in which each level connects to a head to predict a per-frame relevancy score and start/end time displacements. Unlike previous works, our architecture enables Moment Retrieval, Temporal Localization, and Action Segmentation with a single-stage model, without the need for action proposals, motion-based pretrained features, or representation masking. Unlike specialized models, we achieve state-of-the-art results on all three different localization tasks with a unified approach. Code will be available at: \url{https://github.com/google-research/scenic}.
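The pipeline the abstract describes (fused video-text tokens → feature pyramid → per-level head predicting a per-frame relevancy score and start/end displacements) can be sketched in a few lines. This is a minimal illustration with made-up dimensions and random weights, not the paper's implementation; the names `build_pyramid`, `head`, and `decode_segment` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_pyramid(tokens, num_levels=3):
    """Coarsen per-frame tokens by average-pooling adjacent pairs (hypothetical)."""
    levels = [tokens]
    for _ in range(num_levels - 1):
        t = levels[-1]
        if t.shape[0] % 2:               # pad to an even number of frames
            t = np.vstack([t, t[-1:]])
        levels.append(t.reshape(-1, 2, t.shape[1]).mean(axis=1))
    return levels

def head(feats, w_rel, w_disp):
    """Per-frame relevancy score plus non-negative start/end displacements."""
    relevancy = 1.0 / (1.0 + np.exp(-(feats @ w_rel)))   # sigmoid score in (0, 1)
    disp = np.maximum(feats @ w_disp, 0.0)               # (T, 2): start/end offsets
    return relevancy, disp

def decode_segment(frame_idx, disp, stride=1.0):
    """Frame t with displacements (ds, de) yields the segment [t - ds, t + de]."""
    ds, de = disp[frame_idx]
    t = frame_idx * stride
    return (t - ds, t + de)

d = 16                                   # token dimension (made up)
tokens = rng.normal(size=(10, d))        # 10 fused video-text tokens
w_rel, w_disp = rng.normal(size=(d,)), rng.normal(size=(d, 2))

for level in build_pyramid(tokens):
    rel, disp = head(level, w_rel, w_disp)
    best = int(np.argmax(rel))           # most relevant frame at this level
    print(level.shape[0], decode_segment(best, disp))
```

The coarser pyramid levels cover longer temporal spans, which is one plausible reason a single head design can serve short moments and long actions alike.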

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Video | ActivityNet-1.3 | mAP, IoU@0.5 | 59.3 | UnLoc-L |
| Video | ActivityNet Captions | R@1, IoU=0.5 | 48.3 | UnLoc-L |
| Video | ActivityNet Captions | R@1, IoU=0.7 | 30.2 | UnLoc-L |
| Video | ActivityNet Captions | R@5, IoU=0.5 | 79.2 | UnLoc-L |
| Video | ActivityNet Captions | R@5, IoU=0.7 | 61.3 | UnLoc-L |
| Video | ActivityNet Captions | R@1, IoU=0.5 | 48 | UnLoc-B |
| Video | ActivityNet Captions | R@1, IoU=0.7 | 29.7 | UnLoc-B |
| Video | ActivityNet Captions | R@5, IoU=0.5 | 81.5 | UnLoc-B |
| Video | ActivityNet Captions | R@5, IoU=0.7 | 61.4 | UnLoc-B |
| Temporal Action Localization | ActivityNet-1.3 | mAP, IoU@0.5 | 59.3 | UnLoc-L |
| Zero-Shot Learning | ActivityNet-1.3 | mAP, IoU@0.5 | 59.3 | UnLoc-L |
| Action Localization | ActivityNet-1.3 | mAP, IoU@0.5 | 59.3 | UnLoc-L |
| Action Localization | COIN | Frame accuracy | 72.8 | UnLoc-L |
| Action Segmentation | COIN | Frame accuracy | 72.8 | UnLoc-L |
| Moment Retrieval | Charades-STA | R@1, IoU=0.5 | 60.8 | UnLoc-L |
| Moment Retrieval | Charades-STA | R@1, IoU=0.7 | 38.4 | UnLoc-L |
| Moment Retrieval | Charades-STA | R@5, IoU=0.5 | 88.2 | UnLoc-L |
| Moment Retrieval | Charades-STA | R@5, IoU=0.7 | 61.1 | UnLoc-L |
| Moment Retrieval | Charades-STA | R@1, IoU=0.5 | 58.1 | UnLoc-B |
| Moment Retrieval | Charades-STA | R@1, IoU=0.7 | 35.4 | UnLoc-B |
| Moment Retrieval | Charades-STA | R@5, IoU=0.5 | 87.4 | UnLoc-B |
| Moment Retrieval | Charades-STA | R@5, IoU=0.7 | 59.1 | UnLoc-B |
| Moment Retrieval | QVHighlights | R@1, IoU=0.5 | 66.1 | UnLoc-L |
| Moment Retrieval | QVHighlights | R@1, IoU=0.7 | 46.7 | UnLoc-L |
| Moment Retrieval | QVHighlights | R@1, IoU=0.5 | 64.5 | UnLoc-B |
| Moment Retrieval | QVHighlights | R@1, IoU=0.7 | 48.8 | UnLoc-B |
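For reference, an R@K, IoU=θ entry above counts a query as recalled when any of the top-K predicted segments overlaps the ground-truth segment with temporal IoU of at least θ. A minimal sketch of that computation (the function names are my own, not from any evaluation toolkit):

```python
def temporal_iou(a, b):
    """IoU of two temporal segments a = (start, end), b = (start, end), in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(ranked_preds, gt, k=1, thresh=0.5):
    """1 if any of the top-k ranked predictions reaches the IoU threshold, else 0."""
    return int(any(temporal_iou(p, gt) >= thresh for p in ranked_preds[:k]))

# One query with ground truth [5, 15] and two ranked predictions:
preds = [(4.0, 14.0), (20.0, 30.0)]
print(recall_at_k(preds, (5.0, 15.0), k=1, thresh=0.5))  # top-1 IoU = 9/11 ≥ 0.5 → prints 1
```

The dataset-level numbers in the table are this per-query indicator averaged over all queries, reported as a percentage.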
