Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

DCAN: Improving Temporal Action Detection via Dual Context Aggregation

Guo Chen, Yin-Dong Zheng, Limin Wang, Tong Lu

2021-12-07 · Action Detection · Temporal Action Localization
Paper · PDF · Code (official)

Abstract

Temporal action detection aims to locate the boundaries of actions in a video. Current methods based on boundary matching enumerate and score all possible boundary matchings to generate proposals. However, these methods neglect long-range context aggregation in boundary prediction. At the same time, because adjacent matchings have similar semantics, local semantic aggregation over densely generated matchings cannot improve semantic richness and discrimination. In this paper, we propose an end-to-end proposal generation method, the Dual Context Aggregation Network (DCAN), which aggregates context at two levels, the boundary level and the proposal level, to generate high-quality action proposals and thereby improve the performance of temporal action detection. Specifically, we design Multi-Path Temporal Context Aggregation (MTCA) to achieve smooth context aggregation at the boundary level and precise evaluation of boundaries. For matching evaluation, Coarse-to-fine Matching (CFM) is designed to aggregate context at the proposal level and refine the matching map from coarse to fine. We conduct extensive experiments on ActivityNet v1.3 and THUMOS-14. DCAN obtains an average mAP of 35.39% on ActivityNet v1.3 and reaches 54.14% mAP at IoU@0.5 on THUMOS-14, demonstrating that DCAN can generate high-quality proposals and achieve state-of-the-art performance. The code is released at https://github.com/cg1177/DCAN.
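The headline numbers follow the standard evaluation protocol: a proposal counts as a detection when its temporal IoU (tIoU) with a ground-truth segment exceeds a threshold, and the THUMOS-14 "average mAP" is the mean of the per-threshold mAPs over IoU 0.3 to 0.7. A minimal sketch of both quantities (generic evaluation math, not DCAN's own code; the per-threshold values are the ones reported in the results below):

```python
def temporal_iou(a, b):
    """tIoU between two temporal segments given as (start, end) in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

# DCAN's reported per-threshold mAPs on THUMOS'14 (IoU 0.3, 0.4, ..., 0.7)
per_threshold_map = [68.2, 62.7, 54.1, 43.9, 32.6]
avg_map = sum(per_threshold_map) / len(per_threshold_map)
print(round(avg_map, 2))  # 52.3, matching the reported Avg mAP (0.3:0.7)

# Example: segments (2s, 7s) and (4s, 9s) overlap for 3s over a 7s span
print(round(temporal_iou((2.0, 7.0), (4.0, 9.0)), 3))  # 0.429
```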

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Temporal Action Localization | ActivityNet-1.3 | Avg mAP | 35.39 | DCAN (TSN features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP, IoU@0.5 | 51.78 | DCAN (TSN features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP, IoU@0.75 | 35.98 | DCAN (TSN features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP, IoU@0.95 | 9.45 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | Avg mAP (0.3:0.7) | 52.3 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | mAP, IoU@0.3 | 68.2 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | mAP, IoU@0.4 | 62.7 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | mAP, IoU@0.5 | 54.1 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | mAP, IoU@0.6 | 43.9 | DCAN (TSN features) |
| Temporal Action Localization | THUMOS'14 | mAP, IoU@0.7 | 32.6 | DCAN (TSN features) |
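The abstract's point about densely generated matchings can be made concrete: boundary-matching methods score every candidate (start, end) pair over the temporal grid, so the candidate set grows quadratically with the number of positions, and adjacent candidates differ by a single position (hence their similar semantics). A generic sketch of that enumeration, purely illustrative; `max_duration` is a hypothetical cap, and DCAN's actual matching-map construction lives in the linked repository:

```python
def enumerate_matchings(num_positions, max_duration=None):
    """All candidate (start, end) index pairs over a temporal grid.

    This is the dense matching map that modules like CFM evaluate
    coarse-to-fine. Neighbouring pairs differ by one position, which
    is why local aggregation over them adds little discrimination.
    """
    proposals = []
    for start in range(num_positions):
        for end in range(start + 1, num_positions + 1):
            if max_duration is None or end - start <= max_duration:
                proposals.append((start, end))
    return proposals

# A 100-position grid already yields 5050 candidates: quadratic growth.
print(len(enumerate_matchings(100)))  # 5050
```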

Related Papers

- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- CBF-AFA: Chunk-Based Multi-SSL Fusion for Automatic Fluency Assessment (2025-06-25)
- MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans (2025-06-25)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
- Distributed Activity Detection for Cell-Free Hybrid Near-Far Field Communications (2025-06-17)
- Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
- Speaker Diarization with Overlapping Community Detection Using Graph Attention Networks and Label Propagation Algorithm (2025-06-03)
- Attention Is Not Always the Answer: Optimizing Voice Activity Detection with Simple Feature Fusion (2025-06-02)