Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ACM-Net: Action Context Modeling Network for Weakly-Supervised Temporal Action Localization

Sanqing Qu, Guang Chen, Zhijun Li, Lijun Zhang, Fan Lu, Alois Knoll

Published: 2021-04-07
Tasks: Weakly Supervised Action Localization, Action Localization, Weakly-supervised Temporal Action Localization, Temporal Action Localization
Links: Paper | PDF | Code (official)

Abstract

Weakly-supervised temporal action localization aims to localize the temporal boundaries of action instances and identify the corresponding action categories using only video-level labels. Traditional methods mainly focus on separating foreground from background frames with a single attention branch and a single class activation sequence. However, we argue that apart from the distinctive foreground and background frames, there are plenty of semantically ambiguous action-context frames. It does not make sense to group these context frames into the same class as the background, since they are semantically related to a specific action category; consequently, it is challenging to suppress action-context frames with only a single class activation sequence. To address this issue, we propose an action-context modeling network, termed ACM-Net, which integrates a three-branch attention module to simultaneously measure the likelihood of each temporal point being an action instance, context, or non-action background. Based on the obtained three-branch attention values, we then construct three class activation sequences that represent action instances, contexts, and non-action backgrounds, respectively. To evaluate the effectiveness of ACM-Net, we conduct extensive experiments on two benchmark datasets, THUMOS14 and ActivityNet-1.3. The experiments show that our method outperforms current state-of-the-art methods and even achieves performance comparable to fully-supervised methods. Code can be found at https://github.com/ispc-lab/ACM-Net
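
The three-branch idea described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation (see the official repository linked above): feature dimensions, layer sizes, and names such as ThreeBranchAttention are illustrative assumptions.

    # Minimal sketch of a three-branch attention module: score each temporal
    # snippet as action instance, context, or non-action background, then use
    # those scores to form three branch-specific class activation sequences.
    import torch
    import torch.nn as nn

    class ThreeBranchAttention(nn.Module):
        def __init__(self, feat_dim: int = 2048, num_classes: int = 20):
            super().__init__()
            # One attention head emitting three logits per snippet; a softmax
            # makes the instance/context/background likelihoods sum to 1 at
            # each time step.
            self.attention = nn.Sequential(
                nn.Conv1d(feat_dim, 512, kernel_size=1),
                nn.ReLU(),
                nn.Conv1d(512, 3, kernel_size=1),
            )
            # Shared snippet-level classifier producing a base class
            # activation sequence (CAS).
            self.classifier = nn.Conv1d(feat_dim, num_classes, kernel_size=1)

        def forward(self, feats: torch.Tensor):
            # feats: (B, T, D) snippet features, e.g. from a pretrained backbone.
            x = feats.transpose(1, 2)                       # (B, D, T)
            attn = torch.softmax(self.attention(x), dim=1)  # (B, 3, T)
            cas = self.classifier(x).transpose(1, 2)        # (B, T, C)
            # Weight the base CAS by each branch's attention to obtain the
            # instance, context, and background class activation sequences.
            cas_inst = attn[:, 0:1, :].transpose(1, 2) * cas
            cas_ctx = attn[:, 1:2, :].transpose(1, 2) * cas
            cas_bkg = attn[:, 2:3, :].transpose(1, 2) * cas
            return attn, (cas_inst, cas_ctx, cas_bkg)

    # Example: 2 videos, 750 snippets each, 2048-dim features (THUMOS14-style).
    attn, branches = ThreeBranchAttention()(torch.randn(2, 750, 2048))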

Results

Task: Weakly-Supervised Temporal Action Localization (also indexed under Action Localization and Temporal Action Localization). Model: ACM-Net.

Dataset          | Metric              | Value
THUMOS14         | avg-mAP (0.1:0.5)   | 53.2
THUMOS14         | avg-mAP (0.1:0.7)   | 42.6
THUMOS14         | avg-mAP (0.3:0.7)   | 33.4
THUMOS14         | mAP@0.5             | 34.6
ActivityNet-1.3  | mAP@0.5             | 40.1
ActivityNet-1.3  | avg-mAP (0.5:0.95)  | 24.6
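
An avg-mAP row is the arithmetic mean of the mAP values computed at each IoU threshold in the stated range (typically in steps of 0.1 on THUMOS14 and 0.05 on ActivityNet). A minimal sketch of that arithmetic follows; the per-threshold values are placeholders chosen only to illustrate the calculation (only mAP@0.5 = 34.6 appears in the table above), not the paper's reported per-threshold results.

    # avg-mAP = mean of per-IoU-threshold mAP values over the stated range.
    def avg_map(map_per_threshold):
        return sum(map_per_threshold) / len(map_per_threshold)

    # Placeholder mAPs at IoU 0.1, 0.2, 0.3, 0.4, 0.5 (only 34.6 is from the
    # table above); the mean corresponds to an avg-mAP (0.1:0.5) row.
    print(round(avg_map([68.0, 62.0, 55.4, 46.0, 34.6]), 1))  # 53.2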

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
A Review on Coarse to Fine-Grained Animal Action Recognition (2025-06-01)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization (2025-05-29)
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition (2025-05-27)
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization (2025-05-23)