Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Integrating Human Parsing and Pose Network for Human Action Recognition

Runwei Ding, Yuhang Wen, Jinfu Liu, Nan Dai, Fanyang Meng, Mengyuan Liu

2023-07-16 · Human Parsing · Action Recognition
Paper · PDF · Code (official)

Abstract

Human skeletons and RGB sequences are both widely adopted input modalities for human action recognition. However, skeletons lack appearance features, and color data suffer from a large amount of irrelevant depiction. To address this, we introduce the human parsing feature map as a novel modality, since it can selectively retain spatiotemporal features of body parts while filtering out noise from outfits, backgrounds, etc. We propose an Integrating Human Parsing and Pose Network (IPP-Net) for action recognition, which is the first to leverage both skeletons and human parsing feature maps in a dual-branch approach. The human pose branch feeds compact skeletal representations of different modalities into a graph convolutional network to model pose features. In the human parsing branch, multi-frame body-part parsing features are extracted with a human detector and parser, and then learned by a convolutional backbone. A late ensemble of the two branches yields the final predictions, combining robust keypoints with rich semantic body-part features. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of the proposed IPP-Net, which outperforms existing action recognition methods. Our code is publicly available at https://github.com/liujf69/IPP-Net-Parsing.
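The abstract describes a late ensemble of the pose and parsing branches. A common way to realize late fusion is a weighted sum of each branch's class probabilities; the sketch below illustrates that idea only. The function name `late_ensemble` and the fixed weight `alpha` are assumptions for illustration, not the paper's actual fusion code or tuned weights.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_ensemble(pose_scores: np.ndarray,
                  parsing_scores: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Weighted late fusion of two branch score vectors.

    alpha is a hypothetical mixing weight between the pose branch
    (graph convolutional network) and the parsing branch (CNN backbone).
    """
    return alpha * softmax(pose_scores) + (1 - alpha) * softmax(parsing_scores)

# Toy example: 3-class logits from each branch for one clip.
pose = np.array([2.0, 0.5, 0.1])
parsing = np.array([1.5, 1.0, 0.2])
fused = late_ensemble(pose, parsing)
pred = int(fused.argmax())  # index of the highest fused probability
```

In practice the mixing weight is usually chosen on a validation set; the key property of late fusion is that each branch can be trained independently and combined only at the score level.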

Results

| Task | Dataset | Metric | Value | Model |
| Activity Recognition | NTU RGB+D | Accuracy (CS) | 93.8 | IPP-Net (Parsing + Pose) |
| Activity Recognition | NTU RGB+D | Accuracy (CV) | 97.1 | IPP-Net (Parsing + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 91.7 | IPP-Net (Parsing + Pose) |
| Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 90 | IPP-Net (Parsing + Pose) |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 93.8 | IPP-Net (Parsing + Pose) |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 97.1 | IPP-Net (Parsing + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 91.7 | IPP-Net (Parsing + Pose) |
| Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 90 | IPP-Net (Parsing + Pose) |

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)