Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RGB Stream Is Enough for Temporal Action Detection

Chenhao Wang, Hongxiang Cai, Yuxin Zou, Yichao Xiong

Published: 2021-07-09
Tasks: Action Detection · Optical Flow Estimation · Data Augmentation · Temporal Action Localization
Links: Paper · PDF · Code (official)

Abstract

State-of-the-art temporal action detectors to date are based on two-stream input combining RGB frames and optical flow. Although this combination boosts performance significantly, optical flow is a hand-designed representation that not only requires heavy computation but is also methodologically unsatisfactory, since two-stream methods are often not learned end-to-end jointly with the flow. In this paper, we argue that optical flow is dispensable in high-accuracy temporal action detection, and that image-level data augmentation (ILDA) is the key to avoiding performance degradation when optical flow is removed. To evaluate the effectiveness of ILDA, we design a simple yet efficient one-stage temporal action detector based on a single RGB stream, named DaoTAD. Our results show that, when trained with ILDA, DaoTAD matches the accuracy of existing state-of-the-art two-stream detectors while surpassing their inference speed by a large margin, reaching an astounding 6,668 fps on a GeForce GTX 1080 Ti. Code is available at \url{https://github.com/Media-Smart/vedatad}.
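The abstract names image-level data augmentation (ILDA) as the ingredient that compensates for removing optical flow. A common requirement when applying image-level augmentations to video clips is temporal consistency: the same spatial transform must be applied to every frame of a clip, or the augmentation itself would inject spurious motion. The sketch below illustrates that idea with a random crop and horizontal flip sampled once per clip; it is a minimal illustration only, not the official vedatad pipeline, and the function name and frame representation (nested lists of pixel values) are assumptions for this example.

```python
import random

def augment_clip(clip, crop_size, seed=None):
    """Apply the same image-level augmentation (random crop + horizontal
    flip) to every frame of a clip, preserving temporal consistency.

    `clip` is a list of frames; each frame is a list of rows of pixel
    values. Illustrative sketch only -- the official DaoTAD code defines
    its own augmentation pipeline.
    """
    rng = random.Random(seed)
    h, w = len(clip[0]), len(clip[0][0])
    ch, cw = crop_size
    # Sample the crop offsets and flip decision ONCE per clip, not per
    # frame, so every frame receives an identical spatial transform.
    top = rng.randint(0, h - ch)
    left = rng.randint(0, w - cw)
    flip = rng.random() < 0.5
    out = []
    for frame in clip:
        cropped = [row[left:left + cw] for row in frame[top:top + ch]]
        if flip:
            cropped = [row[::-1] for row in cropped]
        out.append(cropped)
    return out
```

Sampling the transform per clip rather than per frame is what lets a single-stream detector learn appearance variation without corrupting the temporal signal it relies on for localization.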

Results

All results are for DaoTAD on THUMOS'14; the site lists the same numbers under each task tag.

Task                         | Avg mAP (0.3:0.7) | mAP@0.3 | mAP@0.4 | mAP@0.5 | mAP@0.6 | mAP@0.7
Video                        | 50                | 62.8    | 59.5    | 53.8    | 43.6    | 30.1
Temporal Action Localization | 50                | 62.8    | 59.5    | 53.8    | 43.6    | 30.1
Zero-Shot Learning           | 50                | 62.8    | 59.5    | 53.8    | 43.6    | 30.1
Action Localization          | 50                | 62.8    | 59.5    | 53.8    | 43.6    | 30.1
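The "Avg mAP (0.3:0.7)" figure is, by convention on THUMOS'14, the mean of the mAP values at IoU thresholds 0.3 through 0.7, which can be verified directly from the per-threshold numbers above:

```python
# mAP at IoU thresholds 0.3, 0.4, 0.5, 0.6, 0.7 (from the table above)
maps = [62.8, 59.5, 53.8, 43.6, 30.1]
avg_map = sum(maps) / len(maps)
print(round(avg_map, 2))  # 49.96, reported as 50 in the table
```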

Related Papers

Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)
Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)
AI-Enhanced Pediatric Pneumonia Detection: A CNN-Based Approach Using Data Augmentation and Generative Adversarial Networks (GANs) (2025-07-13)