Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HR-Pro: Point-supervised Temporal Action Localization via Hierarchical Reliability Propagation

Huaxin Zhang, Xiang Wang, Xiaohao Xu, Zhiwu Qing, Changxin Gao, Nong Sang

2023-08-24 · Weakly Supervised Action Localization · Action Localization · Temporal Action Localization
Paper | PDF | Code (official)

Abstract

Point-supervised Temporal Action Localization (PSTAL) is an emerging research direction for label-efficient learning. However, current methods mainly focus on optimizing the network either at the snippet level or the instance level, neglecting the inherent reliability of point annotations at both levels. In this paper, we propose a Hierarchical Reliability Propagation (HR-Pro) framework, which consists of two reliability-aware stages: Snippet-level Discrimination Learning and Instance-level Completeness Learning, both of which explore the efficient propagation of high-confidence cues in point annotations. For snippet-level learning, we introduce an online-updated memory to store reliable snippet prototypes for each class. We then employ a Reliability-aware Attention Block to capture both intra-video and inter-video dependencies of snippets, resulting in more discriminative and robust snippet representations. For instance-level learning, we propose a point-based proposal generation approach as a means of connecting snippets and instances, which produces high-confidence proposals for further optimization at the instance level. Through multi-level reliability-aware learning, we obtain more reliable confidence scores and more accurate temporal boundaries for predicted proposals. Our HR-Pro achieves state-of-the-art performance on multiple challenging benchmarks, including an impressive average mAP of 60.3% on THUMOS14. Notably, HR-Pro largely surpasses all previous point-supervised methods and even outperforms several competitive fully supervised methods. Code will be available at https://github.com/pipixin321/HR-Pro.
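The two stages described above can be sketched very roughly in code. This is a simplified illustration, not the paper's implementation: the function names, the EMA-style memory update, and the way reliability scales the attention logits are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def update_prototype(prototype, reliable_snippets, momentum=0.9):
    """Online update of one class prototype from high-confidence snippet
    features (hypothetical EMA form of the paper's online-updated memory)."""
    if len(reliable_snippets) == 0:
        return prototype
    batch_mean = np.mean(reliable_snippets, axis=0)
    return momentum * prototype + (1.0 - momentum) * batch_mean

def reliability_attention(snippets, prototypes, reliability, tau=0.1):
    """Attend snippets (T, D) to class prototypes (C, D), scaling each
    snippet's logits by its reliability weight so that high-confidence
    snippets dominate (a simplification of the Reliability-aware
    Attention Block, not the actual module)."""
    sim = snippets @ prototypes.T               # (T, C) similarity scores
    logits = (sim / tau) * reliability[:, None] # reliability-weighted logits
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # row-wise softmax over classes
    return attn @ prototypes                    # (T, D) refined representations
```

In this reading, snippets near point annotations get high reliability weights, so the refined representations are pulled toward the prototypes those snippets match; the refined snippet scores would then seed the point-based proposal generation of the instance-level stage.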

Results

The same result set is listed under five task tags (Video, Temporal Action Localization, Zero-Shot Learning, Action Localization, Weakly Supervised Action Localization); the distinct entries are:

Dataset     | Metric            | Value | Model
GTEA        | mAP@0.1:0.7       | 47.3  | HR-Pro
GTEA        | mAP@0.5           | 37.3  | HR-Pro
BEOID       | mAP@0.1:0.7       | 59.4  | HR-Pro
BEOID       | mAP@0.5           | 55.3  | HR-Pro
THUMOS 2014 | mAP@0.1:0.5       | 71.6  | HR-Pro
THUMOS 2014 | mAP@0.1:0.7       | 60.3  | HR-Pro
THUMOS 2014 | mAP@0.5           | 52.2  | HR-Pro
THUMOS14    | avg-mAP (0.1:0.5) | 71.6  | HR-Pro
THUMOS14    | avg-mAP (0.1:0.7) | 60.3  | HR-Pro
THUMOS14    | avg-mAP (0.3:0.7) | 51.1  | HR-Pro
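The avg-mAP metrics above average the per-threshold mAP over a range of temporal IoU (tIoU) thresholds; for example, avg-mAP (0.1:0.7) is the mean of mAP computed at tIoU 0.1 through 0.7 in steps of 0.1 (the usual THUMOS14 convention). A minimal sketch of the two ingredients, with hypothetical per-threshold values for illustration:

```python
def temporal_iou(seg_a, seg_b):
    """tIoU between two (start, end) segments on the time axis."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_map(map_per_threshold):
    """avg-mAP: mean of the mAP values computed at each tIoU threshold."""
    return sum(map_per_threshold.values()) / len(map_per_threshold)

# Hypothetical per-threshold mAPs at tIoU 0.1..0.7 (not HR-Pro's numbers)
maps = {0.1: 0.80, 0.2: 0.75, 0.3: 0.70, 0.4: 0.62,
        0.5: 0.55, 0.6: 0.45, 0.7: 0.35}
```

A predicted proposal counts as a true positive at a given threshold only if its tIoU with a ground-truth instance of the same class meets that threshold, which is why mAP drops as the threshold tightens (e.g. 71.6 at 0.1:0.5 versus 51.1 at 0.3:0.7 in the table above).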

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
A Review on Coarse to Fine-Grained Animal Action Recognition (2025-06-01)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization (2025-05-29)
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition (2025-05-27)
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization (2025-05-23)