Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization

Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, Hongsheng Li

Published 2020-06-14 · CVPR 2021

Tasks: Action Detection · Action Localization · Spatio-Temporal Action Localization · Video Understanding · Action Recognition · Temporal Action Localization

Links: Paper · PDF · Code (official)

Abstract

Localizing persons and recognizing their actions from videos is a challenging task towards high-level video understanding. Recent advances have been achieved by modeling direct pairwise relations between entities. In this paper, we take one step further: we not only model direct relations between pairs but also take into account indirect higher-order relations established upon multiple elements. We propose to explicitly model the Actor-Context-Actor Relation, which is the relation between two actors based on their interactions with the context. To this end, we design an Actor-Context-Actor Relation Network (ACAR-Net), which builds upon a novel High-order Relation Reasoning Operator and an Actor-Context Feature Bank to enable indirect relation reasoning for spatio-temporal action localization. Experiments on the AVA and UCF101-24 datasets show the advantages of modeling actor-context-actor relations, and visualization of attention maps further verifies that our model is capable of finding relevant higher-order relations to support action detection. Notably, our method ranks first in the AVA-Kinetics action localization task of the ActivityNet Challenge 2020, outperforming other entries by a significant margin (+6.71 mAP). Training code and models will be available at https://github.com/Siyu-C/ACAR-Net.
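The core idea described in the abstract is a two-step relation computation: first-order actor-context relations (each detected actor fused with every spatio-temporal context location), followed by a second relational step that connects different actors through the context they share. The sketch below illustrates that pattern in PyTorch. It is a minimal, unofficial reading of the idea: the module name HighOrderRelationReasoning, the 1x1-conv fusion, and the cross-actor attention are illustrative assumptions, not the authors' implementation (the official code is at https://github.com/Siyu-C/ACAR-Net).

```python
# Minimal, unofficial sketch of actor-context-actor relation reasoning in the
# spirit of ACAR-Net. Names and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class HighOrderRelationReasoning(nn.Module):
    """First-order actor-context fusion, then attention across actors so that
    two actors are related indirectly through the context they share."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # First-order relation: fuse each actor with every context location.
        self.first_order = nn.Conv2d(2 * dim, dim, kernel_size=1)
        # Second-order relation: at each spatial location, let the relation
        # features of different actors attend to one another.
        self.across_actors = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, actors: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # actors:  (N, C)    pooled RoI features, one per detected actor
        # context: (C, H, W) spatio-temporal feature map of the clip
        N, C = actors.shape
        _, H, W = context.shape
        # Tile each actor over the context grid and concatenate channel-wise.
        tiled = actors.view(N, C, 1, 1).expand(N, C, H, W)
        ctx = context.unsqueeze(0).expand(N, C, H, W)
        rel = torch.relu(self.first_order(torch.cat([tiled, ctx], dim=1)))  # (N, C, H, W)
        # Rearrange to (H*W, N, C): one "sequence" of N actors per location.
        seq = rel.permute(2, 3, 0, 1).reshape(H * W, N, C)
        out, _ = self.across_actors(seq, seq, seq)         # actor-actor via shared context
        out = out.reshape(H, W, N, C).permute(2, 3, 0, 1)  # back to (N, C, H, W)
        return self.pool(out).flatten(1)                   # (N, C) per-actor feature

# Toy usage: 3 actors, 256-d features, a 7x7 context grid.
hr2o = HighOrderRelationReasoning(dim=256)
feats = hr2o(torch.randn(3, 256), torch.randn(256, 7, 7))
print(feats.shape)  # torch.Size([3, 256])
```

The key design point the abstract emphasizes is that actor-actor interaction happens only after the context has been mixed in, so the second step reasons over relation maps rather than raw actor features; the paper's Actor-Context Feature Bank extends this over a long temporal window, which the sketch above omits.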

Results

Task                 | Dataset      | Metric    | Value | Model
---------------------+--------------+-----------+-------+----------------------------------------------------
Activity Recognition | AVA v2.1     | mAP (Val) | 30    | ACAR-Net, SlowFast R-101 (Kinetics-400 pretraining)
Activity Recognition | AVA v2.2     | mAP       | 31.72 | ACAR-Net, SlowFast R-101 (Kinetics-700 pretraining)
Action Localization  | AVA-Kinetics | test mAP  | 39.62 | ACAR (multi-scale, ensemble)
Action Localization  | AVA-Kinetics | val mAP   | 40.49 | ACAR (multi-scale, ensemble)
Action Localization  | AVA-Kinetics | val mAP   | 36.36 | ACAR (multi-scale, R-101, 8 × 8)
Action Recognition   | AVA v2.1     | mAP (Val) | 30    | ACAR-Net, SlowFast R-101 (Kinetics-400 pretraining)
Action Recognition   | AVA v2.2     | mAP       | 31.72 | ACAR-Net, SlowFast R-101 (Kinetics-700 pretraining)

Related Papers

VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks (2025-07-15)
EmbRACE-3K: Embodied Reasoning and Action in Complex Environments (2025-07-14)
Chat with AI: The Surprising Turn of Real-time Video Communication from Human to AI (2025-07-14)
Beyond Appearance: Geometric Cues for Robust Video Instance Segmentation (2025-07-08)
Omni-Video: Democratizing Unified Video Understanding and Generation (2025-07-08)