Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Cross-modal Consensus Network for Weakly Supervised Temporal Action Localization

Fa-Ting Hong, Jia-Chang Feng, Dan Xu, Ying Shan, Wei-Shi Zheng

2021-07-27 · Weakly Supervised Action Localization · Action Localization · Weakly-supervised Temporal Action Localization · Temporal Action Localization
Paper · PDF · Code

Abstract

Weakly supervised temporal action localization (WS-TAL) is a challenging task that aims to localize action instances in a given video using only video-level categorical supervision. Previous works use both appearance and motion features, but combine them in an improper way, relying on simple concatenation or score-level fusion. In this work, we argue that features extracted from a pretrained extractor, e.g., I3D, are not task-specific to WS-TAL, so feature re-calibration is needed to reduce task-irrelevant information redundancy. We therefore propose a cross-modal consensus network (CO2-Net) to tackle this problem. In CO2-Net, we introduce two identical cross-modal consensus modules (CCMs), which implement a cross-modal attention mechanism that filters out task-irrelevant redundancy using global information from the main modality and local cross-modal information from the auxiliary modality. Moreover, we treat the attention weights derived from each CCM as pseudo targets for the attention weights derived from the other CCM, maintaining consistency between the predictions of the two CCMs in a mutual learning manner. Finally, we conduct extensive experiments on two commonly used temporal action localization datasets, THUMOS14 and ActivityNet1.2, to verify our method, and achieve state-of-the-art results. The experimental results show that our proposed cross-modal consensus module can produce more representative features for temporal action localization.
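As a rough illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch of a pair of cross-modal consensus modules with a mutual-learning consistency term. This is not the authors' implementation: the module names, dimensions, and the exact form of the attention (channel-wise sigmoid gating built from a global main-modality descriptor and local auxiliary-modality cues) are assumptions made for illustration.

```python
# Minimal sketch of the CO2-Net idea, assuming PyTorch and (B, T, D) snippet
# features from a pretrained extractor such as I3D. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalConsensusModule(nn.Module):
    """Re-calibrates main-modality features using global context from the
    main modality and local cues from the auxiliary modality (CCM sketch)."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.global_proj = nn.Linear(dim, dim)                           # global info, main modality
        self.local_proj = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # local info, auxiliary modality

    def forward(self, main: torch.Tensor, aux: torch.Tensor):
        g = self.global_proj(main.mean(dim=1, keepdim=True))        # (B, 1, D) global descriptor
        l = self.local_proj(aux.transpose(1, 2)).transpose(1, 2)    # (B, T, D) local cues
        attn = torch.sigmoid(g * l)                                 # gate that filters redundancy
        return main * attn, attn                                    # re-calibrated features, weights

def mutual_learning_loss(attn_a: torch.Tensor, attn_b: torch.Tensor) -> torch.Tensor:
    """Each CCM's attention weights serve as a (detached) pseudo target for
    the other's, keeping the two modality streams consistent."""
    return (F.mse_loss(attn_a, attn_b.detach())
            + F.mse_loss(attn_b, attn_a.detach()))

# Usage: RGB and flow features each take a turn as the main modality.
rgb, flow = torch.randn(2, 100, 1024), torch.randn(2, 100, 1024)
ccm_rgb, ccm_flow = CrossModalConsensusModule(), CrossModalConsensusModule()
rgb_refined, a_rgb = ccm_rgb(rgb, flow)      # RGB main, flow auxiliary
flow_refined, a_flow = ccm_flow(flow, rgb)   # flow main, RGB auxiliary
consistency = mutual_learning_loss(a_rgb, a_flow)
```

In the paper's setting the two modalities would be the RGB and optical-flow I3D streams, with each stream taking a turn as the main input to one CCM.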

Results

All values are mAP (%) for the CO2-Net model. The archive cross-lists the same results under several task pages: Video, Temporal Action Localization, Zero-Shot Learning, and Action Localization (all rows below), plus Weakly Supervised Action Localization (the THUMOS 2014 rows) and Weakly-supervised Temporal Action Localization (the THUMOS’14 rows).

Dataset | Metric | Value | Model
THUMOS’14 | mAP IOU@0.1 | 70.1 | CO2-Net
THUMOS’14 | mAP IOU@0.2 | 63.6 | CO2-Net
THUMOS’14 | mAP IOU@0.3 | 54.5 | CO2-Net
THUMOS’14 | mAP IOU@0.4 | 45.7 | CO2-Net
THUMOS’14 | mAP IOU@0.5 | 38.3 | CO2-Net
THUMOS’14 | mAP IOU@0.6 | 26.4 | CO2-Net
THUMOS’14 | mAP IOU@0.7 | 13.4 | CO2-Net
THUMOS’14 | mAP IOU@0.8 | 6.9 | CO2-Net
THUMOS’14 | mAP IOU@0.9 | 2.0 | CO2-Net
THUMOS’14 | mAP@AVG(0.1:0.9) | 35.7 | CO2-Net
THUMOS 2014 | mAP@0.1:0.5 | 54.4 | CO2-Net
THUMOS 2014 | mAP@0.1:0.7 | 44.6 | CO2-Net
THUMOS 2014 | mAP@0.5 | 38.3 | CO2-Net
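The averaged metrics are consistent with the per-threshold values; a quick plain-Python sanity check (an illustrative script, not part of the paper's code):

```python
# Per-threshold mAP (%) for CO2-Net on THUMOS'14, copied from the table above.
per_iou = {0.1: 70.1, 0.2: 63.6, 0.3: 54.5, 0.4: 45.7, 0.5: 38.3,
           0.6: 26.4, 0.7: 13.4, 0.8: 6.9, 0.9: 2.0}

def avg_map(lo: float, hi: float) -> float:
    """Average mAP over all IoU thresholds in [lo, hi]."""
    vals = [v for t, v in per_iou.items() if lo <= t <= hi]
    return sum(vals) / len(vals)

print(round(avg_map(0.1, 0.5), 1))  # 54.4 -> matches mAP@0.1:0.5
print(round(avg_map(0.1, 0.7), 1))  # 44.6 -> matches mAP@0.1:0.7
print(round(avg_map(0.1, 0.9), 1))  # 35.7 -> matches mAP@AVG(0.1:0.9)
```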

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition · 2025-07-16
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition · 2025-06-23
Zero-Shot Temporal Interaction Localization for Egocentric Videos · 2025-06-04
A Review on Coarse to Fine-Grained Animal Action Recognition · 2025-06-01
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization · 2025-05-30
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization · 2025-05-29
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition · 2025-05-27
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization · 2025-05-23