
Masked Motion Predictors are Strong 3D Action Representation Learners

Yunyao Mao, Jiajun Deng, Wengang Zhou, Yao Fang, Wanli Ouyang, Houqiang Li

2023-08-14 · ICCV 2023

Tasks: Self-supervised Skeleton-based Action Recognition, 3D Action Recognition, Skeleton Based Action Recognition, Motion Prediction, Action Recognition, Temporal Action Localization, Few-Shot Skeleton-Based Action Recognition

Paper · PDF · Code (official)

Abstract

In 3D human action recognition, limited supervised data makes it challenging to fully tap into the modeling potential of powerful networks such as transformers. As a result, researchers have been actively investigating effective self-supervised pre-training strategies. In this work, we show that instead of following the prevalent pretext task to perform masked self-component reconstruction in human joints, explicit contextual motion modeling is key to the success of learning effective feature representations for 3D action recognition. Formally, we propose the Masked Motion Prediction (MAMP) framework. To be specific, the proposed MAMP takes as input the masked spatio-temporal skeleton sequence and predicts the corresponding temporal motion of the masked human joints. Considering the high temporal redundancy of the skeleton sequence, in our MAMP, the motion information also acts as an empirical semantic richness prior that guides the masking process, promoting better attention to semantically rich temporal regions. Extensive experiments on NTU-60, NTU-120, and PKU-MMD datasets show that the proposed MAMP pre-training substantially improves the performance of the adopted vanilla transformer, achieving state-of-the-art results without bells and whistles. The source code of our MAMP is available at https://github.com/maoyunyao/MAMP.
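To make the pre-training objective concrete, the sketch below shows one MAMP-style masked motion prediction step in PyTorch. It is a minimal illustration, not the authors' implementation: the layer sizes, the 90% mask ratio, and the choice to mask whole frames (the paper masks spatio-temporal joint tokens) are assumptions, and the module names are invented for the example. The official code at https://github.com/maoyunyao/MAMP is the reference.

```python
# Minimal sketch of MAMP-style masked motion prediction (illustrative only;
# sizes, mask ratio, and frame-level masking are assumptions, not the
# authors' implementation -- see https://github.com/maoyunyao/MAMP).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedMotionPredictor(nn.Module):
    def __init__(self, num_joints=25, dim=256, depth=4, heads=8, mask_ratio=0.9):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(num_joints * 3, dim)   # per-frame joint embedding
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)  # vanilla transformer
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, num_joints * 3)    # predicts per-frame motion

    def forward(self, skel):                          # skel: (B, T, J*3) coordinates
        B, T, _ = skel.shape
        # Prediction target: temporal motion, i.e. the frame-to-frame
        # difference of joint coordinates, not the coordinates themselves.
        motion = skel[:, 1:] - skel[:, :-1]           # (B, T-1, J*3)
        x = self.embed(skel[:, :-1])                  # align frames with motion
        # (Positional/skeleton embeddings omitted for brevity.)

        # Motion-guided masking: frames with larger motion magnitude are
        # sampled for masking more often (the semantic-richness prior).
        score = motion.abs().mean(-1)                 # (B, T-1) motion magnitude
        num_mask = int(self.mask_ratio * (T - 1))
        idx = torch.multinomial(F.softmax(score, dim=-1), num_mask)
        mask = torch.zeros(B, T - 1, dtype=torch.bool, device=skel.device)
        mask.scatter_(1, idx, True)

        # Replace masked tokens with a learnable mask token, encode, predict.
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, T - 1, -1), x)
        pred = self.head(self.encoder(x))             # (B, T-1, J*3)

        # Loss is computed only on masked positions, as in masked prediction.
        return F.mse_loss(pred[mask], motion[mask])

# Usage: a random batch of 64-frame skeleton sequences, 25 joints with xyz.
loss = MaskedMotionPredictor()(torch.randn(2, 64, 25 * 3))
loss.backward()
```

The two pieces that match the abstract are the target and the sampler: the model regresses temporal differences of joint coordinates rather than reconstructing the joints, and the masking probability is weighted by motion magnitude so that semantically rich temporal regions are masked, and therefore predicted, more often.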

Results

Task                    Dataset         Metric                     Value   Model
3D Action Recognition   NTU RGB+D 120   Accuracy (Cross-Setup)     91.3    MAMP
3D Action Recognition   NTU RGB+D 120   Accuracy (Cross-Subject)   90.0    MAMP
3D Action Recognition   NTU RGB+D       Accuracy (Cross-Subject)   93.1    MAMP
3D Action Recognition   NTU RGB+D       Accuracy (Cross-View)      97.5    MAMP

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Stochastic Human Motion Prediction with Memory of Action Transition and Action Characteristic (2025-07-05)
Temporal Continual Learning with Prior Compensation for Human Motion Prediction (2025-07-05)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)