Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos

Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, Shih-Fu Chang

2017-03-04 · CVPR 2017 · Action Localization · Temporal Action Localization
Paper · PDF · Code (official)

Abstract

Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. We will update the camera-ready version and publish the source codes online soon.
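
The abstract's core idea, recovering frame-level action scores by downsampling a 3D-ConvNet feature map in space while upsampling it in time, can be sketched as below. This is a minimal illustration assuming PyTorch, with made-up channel sizes; the paper's actual CDC filter performs both operations jointly in a single learned kernel rather than as the two stacked layers shown here.

```python
# Minimal sketch of a CDC-style layer: collapse the spatial grid of a
# C3D conv5-like feature map and double its temporal length, so per-frame
# predictions can be recovered. Shapes/channels are illustrative assumptions.
import torch
import torch.nn as nn

class CDCLayer(nn.Module):
    def __init__(self, in_channels, out_channels, spatial_size=7, temporal_up=2):
        super().__init__()
        # Spatial downsampling: a 3D conv whose kernel spans the full
        # spatial extent (e.g. 7x7 -> 1x1), no change in time.
        self.spatial_down = nn.Conv3d(
            in_channels, out_channels,
            kernel_size=(1, spatial_size, spatial_size))
        # Temporal upsampling: a transposed conv over the time axis only
        # (e.g. T -> 2T), no change in space.
        self.temporal_up = nn.ConvTranspose3d(
            out_channels, out_channels,
            kernel_size=(temporal_up, 1, 1), stride=(temporal_up, 1, 1))

    def forward(self, x):            # x: (N, C, T, H, W)
        x = self.spatial_down(x)     # -> (N, C', T, 1, 1)
        x = self.temporal_up(x)      # -> (N, C', T * temporal_up, 1, 1)
        return x

# Example: a conv5-like feature map for a 32-frame clip (temporal length 32/8 = 4).
feats = torch.randn(1, 512, 4, 7, 7)
layer = CDCLayer(512, 4096)
print(layer(feats).shape)  # torch.Size([1, 4096, 8, 1, 1])
```

In the paper, stacking three such temporal-upsampling stages brings the conv5 temporal length L/8 back to L, yielding one prediction per input frame.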

Results

Task                          Dataset    Metric        Value (%)  Model
Video                         THUMOS’14  mAP IOU@0.3   40.1       CDC
Video                         THUMOS’14  mAP IOU@0.4   29.4       CDC
Video                         THUMOS’14  mAP IOU@0.5   23.3       CDC
Video                         THUMOS’14  mAP IOU@0.6   13.1       CDC
Video                         THUMOS’14  mAP IOU@0.7    7.9       CDC
Temporal Action Localization  THUMOS’14  mAP IOU@0.3   40.1       CDC
Temporal Action Localization  THUMOS’14  mAP IOU@0.4   29.4       CDC
Temporal Action Localization  THUMOS’14  mAP IOU@0.5   23.3       CDC
Temporal Action Localization  THUMOS’14  mAP IOU@0.6   13.1       CDC
Temporal Action Localization  THUMOS’14  mAP IOU@0.7    7.9       CDC
Zero-Shot Learning            THUMOS’14  mAP IOU@0.3   40.1       CDC
Zero-Shot Learning            THUMOS’14  mAP IOU@0.4   29.4       CDC
Zero-Shot Learning            THUMOS’14  mAP IOU@0.5   23.3       CDC
Zero-Shot Learning            THUMOS’14  mAP IOU@0.6   13.1       CDC
Zero-Shot Learning            THUMOS’14  mAP IOU@0.7    7.9       CDC
Action Localization           THUMOS’14  mAP IOU@0.3   40.1       CDC
Action Localization           THUMOS’14  mAP IOU@0.4   29.4       CDC
Action Localization           THUMOS’14  mAP IOU@0.5   23.3       CDC
Action Localization           THUMOS’14  mAP IOU@0.6   13.1       CDC
Action Localization           THUMOS’14  mAP IOU@0.7    7.9       CDC
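
The thresholds in the table (IOU@0.3 through IOU@0.7) refer to temporal intersection-over-union: a predicted segment counts as correct at threshold t only if its temporal IoU with a same-class ground-truth segment is at least t, and mAP is the mean average precision under that matching rule. The sketch below shows the temporal IoU computation; the function name and the (start, end) tuple format are illustrative, not from the benchmark's official evaluation code.

```python
# Minimal sketch of temporal IoU between two segments given as
# (start_sec, end_sec) tuples.
def temporal_iou(pred, gt):
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

print(temporal_iou((10.0, 20.0), (12.0, 22.0)))  # ~0.667, passes the 0.5 threshold
```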

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
A Review on Coarse to Fine-Grained Animal Action Recognition (2025-06-01)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization (2025-05-29)
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition (2025-05-27)
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization (2025-05-23)