Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Video Self-Stitching Graph Network for Temporal Action Localization

Chen Zhao, Ali Thabet, Bernard Ghanem

2020-11-30 · ICCV 2021
Tasks: Action Localization · Temporal Action Localization
Paper · PDF · Code (official)

Abstract

Temporal action localization (TAL) in videos is a challenging task, especially due to the large variation in action temporal scales. Short actions usually account for a major proportion of the data, but tend to have the lowest performance. In this paper, we confront the challenge of short actions and propose a multi-level cross-scale solution dubbed the video self-stitching graph network (VSGN). VSGN has two key components: video self-stitching (VSS) and a cross-scale graph pyramid network (xGPN). In VSS, we focus on a short period of a video and magnify it along the temporal dimension to obtain a larger scale. We stitch the original clip and its magnified counterpart into one input sequence to take advantage of the complementary properties of both scales. The xGPN component further exploits cross-scale correlations through a pyramid of cross-scale graph networks, each containing a hybrid module that aggregates features both across scales and within the same scale. VSGN not only enhances the feature representations, but also generates more positive anchors for short actions and more short training samples. Experiments demonstrate that VSGN markedly improves the localization performance of short actions and achieves state-of-the-art overall performance on THUMOS-14 and ActivityNet-v1.3.
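The self-stitching step described in the abstract — magnifying a short period of a clip along time and stitching it to the original — can be sketched as a small operation on a snippet-feature sequence. The function below is a hypothetical illustration only: the name `self_stitch`, the `clip_frac` parameter, and the choice of linear interpolation are assumptions, not the paper's actual implementation.

```python
import numpy as np

def self_stitch(features, clip_frac=0.5, scale=2):
    """Sketch of video self-stitching (VSS): magnify an early
    portion of a clip-feature sequence along the temporal axis
    and stitch it to the original sequence.

    features: (T, C) array of per-snippet features.
    clip_frac: fraction of the sequence treated as the short period
               (hypothetical knob, not from the paper).
    scale: temporal magnification factor.
    """
    T, C = features.shape
    t = max(1, int(T * clip_frac))
    clip = features[:t]                      # the short period to magnify
    # Temporal up-sampling by per-channel linear interpolation.
    src = np.arange(t)
    dst = np.linspace(0, t - 1, t * scale)
    magnified = np.stack(
        [np.interp(dst, src, clip[:, c]) for c in range(C)], axis=1
    )
    # Stitch original clip and magnified counterpart into one sequence.
    return np.concatenate([features, magnified], axis=0)

stitched = self_stitch(np.random.rand(100, 4), clip_frac=0.5, scale=2)
print(stitched.shape)  # (200, 4): 100 original + 50 * 2 magnified snippets
```

The stitched sequence then lets one network see the same short action at two temporal scales in a single forward pass, which is the complementarity the abstract refers to.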

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Temporal Action Localization | ActivityNet-1.3 | mAP | 35.94 | VSGN (TSP features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP IOU@0.5 | 53.26 | VSGN (TSP features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP IOU@0.75 | 36.76 | VSGN (TSP features) |
| Temporal Action Localization | ActivityNet-1.3 | mAP IOU@0.95 | 8.12 | VSGN (TSP features) |
| Temporal Action Localization | THUMOS’14 | Avg mAP (0.3:0.7) | 50.2 | VSGN |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.3 | 66.7 | VSGN |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.4 | 60.4 | VSGN |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.5 | 52.4 | VSGN |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.6 | 41.0 | VSGN |
| Temporal Action Localization | THUMOS’14 | mAP IOU@0.7 | 30.4 | VSGN |
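The metrics in the table are mAP values at temporal-IoU (tIoU) thresholds, and "Avg mAP (0.3:0.7)" averages mAP over the thresholds 0.3 to 0.7 in steps of 0.1. A minimal sketch of the tIoU computation, plus the averaging applied to the reported per-threshold numbers (which only approximately reproduces the 50.2 figure, since the published values are already rounded):

```python
def temporal_iou(a, b):
    """tIoU between two temporal segments given as (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# Example: segments (0, 10) and (5, 15) overlap by 5 over a union of 15.
print(round(temporal_iou((0.0, 10.0), (5.0, 15.0)), 4))  # 0.3333

# Averaging VSGN's reported THUMOS'14 mAPs over tIoU 0.3..0.7:
vsgn_thumos = [66.7, 60.4, 52.4, 41.0, 30.4]
avg_map = sum(vsgn_thumos) / len(vsgn_thumos)
print(round(avg_map, 2))  # 50.18, consistent with the reported 50.2
```

A prediction counts as a true positive only if its tIoU with a ground-truth segment of the same class meets the threshold, which is why short actions, whose segments tolerate very little boundary error, are the hard case this paper targets.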

Related Papers

DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Zero-Shot Temporal Interaction Localization for Egocentric Videos (2025-06-04)
A Review on Coarse to Fine-Grained Animal Action Recognition (2025-06-01)
LLM-powered Query Expansion for Enhancing Boundary Prediction in Language-driven Action Localization (2025-05-30)
CLIP-AE: CLIP-assisted Cross-view Audio-Visual Enhancement for Unsupervised Temporal Action Localization (2025-05-29)
DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition (2025-05-27)
ProTAL: A Drag-and-Link Video Programming Framework for Temporal Action Localization (2025-05-23)