
PSUMNet: Unified Modality Part Streams are All You Need for Efficient Pose-based Action Recognition

Neel Trivedi, Ravi Kiran Sarvadevabhatla

2022-08-11 · Skeleton Based Action Recognition · Action Recognition
Paper · PDF · Code (official)

Abstract

Pose-based action recognition is predominantly tackled by approaches which treat the input skeleton in a monolithic fashion, i.e. joints in the pose tree are processed as a whole. However, such approaches ignore the fact that action categories are often characterized by localized action dynamics involving only small joint groups, such as the hands (e.g. "Thumbs up") or the legs (e.g. "Kicking"). Although part-grouping approaches exist, they do not consider each part group within the global pose frame, which causes such methods to fall short. Further, conventional approaches employ independent modality streams (e.g. joint, bone, joint velocity, bone velocity) and train their network multiple times on these streams, which massively increases the number of training parameters. To address these issues, we introduce PSUMNet, a novel approach for scalable and efficient pose-based action recognition. At the representation level, we propose a global-frame-based part-stream approach as opposed to conventional modality-based streams. Within each part stream, the associated data from multiple modalities is unified and consumed by the processing pipeline. Experimentally, PSUMNet achieves state-of-the-art performance on the widely used NTU RGB+D 60/120 datasets and the dense-joint skeleton datasets NTU 60-X/120-X. PSUMNet is highly efficient and outperforms competing methods which use 100%-400% more parameters. PSUMNet also generalizes to the SHREC hand gesture dataset with competitive performance. Overall, PSUMNet's scalability, performance and efficiency make it an attractive choice for action recognition and for deployment on compute-restricted embedded and edge devices. Code and pretrained models can be accessed at https://github.com/skelemoa/psumnet
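The abstract's core design choice is to unify all modalities within each part stream, rather than training one network per modality. The sketch below illustrates that input construction in NumPy; it is not the authors' implementation, and the chain-topology parent map and the hand-part joint indices are illustrative assumptions (the actual part groupings, bone pairs, and tensor layout in the official repository may differ):

```python
import numpy as np

# Skeleton sequence: (T frames, V joints, C=3 coordinates).
# V=25 matches NTU RGB+D; everything else here is a toy setup.
T, V, C = 64, 25, 3
rng = np.random.default_rng(0)
joints = rng.standard_normal((T, V, C)).astype(np.float32)

# Parent map for bone vectors: joint 0 is the root, every other joint
# is parented to its predecessor. A real NTU bone list differs.
parents = np.arange(V)
parents[1:] = np.arange(V - 1)

def modalities(x, parents):
    """Derive the four conventional modalities from joint positions."""
    bone = x - x[:, parents, :]                        # joint-to-parent offsets
    joint_vel = np.diff(x, axis=0, prepend=x[:1])      # temporal differences
    bone_vel = np.diff(bone, axis=0, prepend=bone[:1])
    return x, bone, joint_vel, bone_vel

# A "part stream" keeps only a subset of joints, but the indices refer to
# the full skeleton (no per-part re-centering), so the part stays
# registered to the global pose frame.
hand_part = [5, 6, 7, 8, 9, 10, 11, 12]  # hypothetical arm/hand indices

def unified_part_stream(x, parents, part):
    """Concatenate all four modalities channel-wise for one part group.

    Output shape: (T, len(part), 4 * C) -- one unified stream consumed by
    one network pass, instead of four separately trained modality streams.
    """
    mods = modalities(x, parents)
    return np.concatenate([m[:, part, :] for m in mods], axis=-1)

stream = unified_part_stream(joints, parents, hand_part)
print(stream.shape)  # (64, 8, 12)
```

Because the four modalities are concatenated channel-wise, a single network per part stream consumes all of them in one training run, which reflects the parameter saving the abstract describes: the network is not retrained once per modality.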

Results

Task               | Dataset       | Metric                   | Value | Model
-------------------|---------------|--------------------------|-------|--------
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup)   | 90.6  | PSUMNet
Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 89.4  | PSUMNet
Action Recognition | NTU RGB+D     | Accuracy (Cross-Subject) | 92.9  | PSUMNet
Action Recognition | NTU RGB+D     | Accuracy (Cross-View)    | 96.7  | PSUMNet

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Modeling Code: Is Text All You Need? (2025-07-15)
All Eyes, no IMU: Learning Flight Attitude from Vision Alone (2025-07-15)
Is Diversity All You Need for Scalable Robotic Manipulation? (2025-07-08)
DESIGN AND IMPLEMENTATION OF ONLINE CLEARANCE REPORT. (2025-07-07)
Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models (2025-07-03)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
Prompt2SegCXR: Prompt to Segment All Organs and Diseases in Chest X-rays (2025-07-01)