Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Two-Stream Consensus Network for Weakly-Supervised Temporal Action Localization

Yuanhao Zhai, Le Wang, Wei Tang, Qilin Zhang, Junsong Yuan, Gang Hua

2020-10-22 · ECCV 2020

Tasks: Weakly Supervised Action Localization · Action Localization · Weakly-supervised Temporal Action Localization · Temporal Action Localization

Paper · PDF

Abstract

Weakly-supervised Temporal Action Localization (W-TAL) aims to classify and localize all action instances in an untrimmed video under only video-level supervision. However, without frame-level annotations, it is challenging for W-TAL methods to identify false positive action proposals and generate action proposals with precise temporal boundaries. In this paper, we present a Two-Stream Consensus Network (TSCN) to simultaneously address these challenges. The proposed TSCN features an iterative refinement training method, where a frame-level pseudo ground truth is iteratively updated, and used to provide frame-level supervision for improved model training and false positive action proposal elimination. Furthermore, we propose a new attention normalization loss to encourage the predicted attention to act like a binary selection, and promote the precise localization of action instance boundaries. Experiments conducted on the THUMOS14 and ActivityNet datasets show that the proposed TSCN outperforms current state-of-the-art methods, and even achieves comparable results with some recent fully-supervised methods.
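The two mechanisms the abstract describes — fusing the two streams' frame-level attention into a pseudo ground truth, and a normalization loss that pushes attention toward a binary selection — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion weight `w`, the binarization threshold, and the top/bottom fraction `s` are hypothetical parameter names chosen for this sketch.

```python
import numpy as np

def fuse_attention(att_rgb, att_flow, w=0.5):
    # Two-stream consensus: combine the RGB and optical-flow streams'
    # frame-level attention into a single sequence (w is a sketch parameter).
    return w * att_rgb + (1.0 - w) * att_flow

def pseudo_ground_truth(fused_att, threshold=0.5):
    # Binarize the fused attention into frame-level pseudo labels,
    # which then supervise the next training round.
    return (fused_att >= threshold).astype(np.float32)

def attention_norm_loss(att, s=8):
    # Encourage a binary-like attention: minimize the negative gap between
    # the mean of the top-l and bottom-l attention values (l = T // s),
    # pushing foreground frames toward 1 and background frames toward 0.
    T = att.shape[0]
    l = max(T // s, 1)
    srt = np.sort(att)
    return -(srt[-l:].mean() - srt[:l].mean())
```

In this sketch, a fully binary attention sequence attains the minimum loss of -1, while a uniform attention sequence attains the maximum of 0, so minimizing the loss drives the predicted attention toward a hard foreground/background selection.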

Results

Task | Dataset | Metric | Value | Model
Temporal Action Localization | THUMOS14 | avg-mAP (0.1:0.5) | 47.0 | TSCN
Temporal Action Localization | THUMOS14 | avg-mAP (0.1:0.7) | 37.8 | TSCN
Temporal Action Localization | THUMOS14 | avg-mAP (0.3:0.7) | 28.8 | TSCN
Action Localization | THUMOS14 | avg-mAP (0.1:0.5) | 47.0 | TSCN
Action Localization | THUMOS14 | avg-mAP (0.1:0.7) | 37.8 | TSCN
Action Localization | THUMOS14 | avg-mAP (0.3:0.7) | 28.8 | TSCN
Weakly Supervised Action Localization | THUMOS14 | avg-mAP (0.1:0.5) | 47.0 | TSCN
Weakly Supervised Action Localization | THUMOS14 | avg-mAP (0.1:0.7) | 37.8 | TSCN
Weakly Supervised Action Localization | THUMOS14 | avg-mAP (0.3:0.7) | 28.8 | TSCN

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
A Review on Coarse to Fine-Grained Animal Action Recognition (2025-06-01)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization (2025-05-29)
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition (2025-05-27)
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization (2025-05-23)