Chenhao Wang, Hongxiang Cai, Yuxin Zou, Yichao Xiong
State-of-the-art temporal action detectors to date rely on two-stream input consisting of RGB frames and optical flow. Although combining RGB frames with optical flow boosts performance significantly, optical flow is a hand-designed representation that not only requires heavy computation, but is also methodologically unsatisfactory: two-stream methods are often not learned end-to-end jointly with the flow. In this paper, we argue that optical flow is dispensable for high-accuracy temporal action detection, and that image-level data augmentation (ILDA) is the key to avoiding performance degradation when optical flow is removed. To evaluate the effectiveness of ILDA, we design a simple yet efficient one-stage temporal action detector based on a single RGB stream, named DaoTAD. Our results show that, when trained with ILDA, DaoTAD achieves accuracy comparable to all existing state-of-the-art two-stream detectors while surpassing the inference speed of previous methods by a large margin, reaching an astounding 6668 fps on a GeForce GTX 1080 Ti. Code is available at \url{https://github.com/Media-Smart/vedatad}.
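The abstract does not spell out which augmentations ILDA comprises, but the defining property of image-level augmentation on video input is that one randomly sampled spatial transform is applied identically to every frame of a clip, preserving temporal coherence. Below is a minimal, hypothetical sketch of that idea (the helper name `augment_clip` and the choice of random crop plus horizontal flip are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def augment_clip(clip, crop_size, rng=None):
    """Apply ONE randomly sampled spatial augmentation to every frame
    of a clip, so all frames stay spatially aligned with each other.

    clip: np.ndarray of shape (T, H, W, C)
    crop_size: (crop_h, crop_w) for the random crop
    rng: optional np.random.Generator for reproducibility
    """
    rng = rng if rng is not None else np.random.default_rng()
    t, h, w, c = clip.shape
    ch, cw = crop_size
    # Sample the crop offset and the flip decision ONCE per clip,
    # then apply them to all T frames identically.
    y0 = int(rng.integers(0, h - ch + 1))
    x0 = int(rng.integers(0, w - cw + 1))
    out = clip[:, y0:y0 + ch, x0:x0 + cw, :]
    if rng.random() < 0.5:
        out = out[:, :, ::-1, :]  # horizontally flip every frame
    return out

# Usage: an 8-frame clip of 16x16 RGB frames, cropped to 8x8.
clip = np.stack([np.full((16, 16, 3), i, dtype=np.float32) for i in range(8)])
aug = augment_clip(clip, (8, 8), np.random.default_rng(0))
```

Sampling the transform once per clip (rather than per frame) is what keeps the augmented clip a valid video: the motion pattern the detector sees is unchanged, only its spatial framing varies.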
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Temporal Action Localization | THUMOS’14 | Avg mAP (0.3:0.7) | 50 | DaoTAD |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.3 | 62.8 | DaoTAD |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.4 | 59.5 | DaoTAD |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.5 | 53.8 | DaoTAD |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.6 | 43.6 | DaoTAD |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.7 | 30.1 | DaoTAD |