Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Explore Human Parsing Modality for Action Recognition

Jinfu Liu, Runwei Ding, Yuhang Wen, Nan Dai, Fanyang Meng, Shen Zhao, Mengyuan Liu

2024-01-04 · CAAI Transactions on Intelligence Technology 2023 · Tags: Human Parsing, Action Recognition
Paper · PDF · Code (official)

Abstract

Multimodal action recognition methods have achieved high success using the pose and RGB modalities. However, skeleton sequences lack appearance depiction, and RGB images suffer from irrelevant noise due to modality limitations. To address this, we introduce the human parsing feature map as a novel modality, since it can selectively retain effective semantic features of the body parts while filtering out most irrelevant noise. We propose a new dual-branch framework called the Ensemble Human Parsing and Pose Network (EPP-Net), which is the first to leverage both the skeleton and human parsing modalities for action recognition. The human pose branch feeds robust skeletons into a graph convolutional network to model pose features, while the human parsing branch leverages depictive parsing feature maps to model parsing features via convolutional backbones. The two high-level features are then effectively combined through a late fusion strategy for better action recognition. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of our proposed EPP-Net, which outperforms existing action recognition methods. Our code is available at: https://github.com/liujf69/EPP-Net-Action.
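The late fusion strategy described in the abstract can be sketched as a weighted average of each branch's class-score distribution. This is a minimal illustration, not the paper's implementation: the fusion weight `alpha`, the array shapes, and the toy logits below are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the class axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(pose_logits, parsing_logits, alpha=0.5):
    # Hypothetical late fusion: convex combination of per-branch
    # softmax scores; alpha weights the pose branch.
    return alpha * softmax(pose_logits) + (1 - alpha) * softmax(parsing_logits)

# toy example: 2 samples, 3 action classes (values are illustrative)
pose = np.array([[2.0, 0.1, 0.1], [0.1, 0.2, 2.5]])
parsing = np.array([[1.5, 0.3, 0.2], [0.0, 0.1, 2.0]])
fused = late_fusion(pose, parsing, alpha=0.6)
pred = fused.argmax(axis=1)  # predicted action class per sample
```

Because each branch's scores are already normalized, the fused rows remain valid probability distributions, and the fusion weight can be tuned on a validation set.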

Results

Task                 | Dataset       | Metric                   | Value | Model
Activity Recognition | NTU RGB+D     | Accuracy (Cross-Subject) | 94.7  | EPP-Net (Parsing + Pose)
Activity Recognition | NTU RGB+D     | Accuracy (Cross-View)    | 97.7  | EPP-Net (Parsing + Pose)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup)   | 92.8  | EPP-Net (Parsing + Pose)
Activity Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 91.1  | EPP-Net (Parsing + Pose)
Action Recognition   | NTU RGB+D     | Accuracy (Cross-Subject) | 94.7  | EPP-Net (Parsing + Pose)
Action Recognition   | NTU RGB+D     | Accuracy (Cross-View)    | 97.7  | EPP-Net (Parsing + Pose)
Action Recognition   | NTU RGB+D 120 | Accuracy (Cross-Setup)   | 92.8  | EPP-Net (Parsing + Pose)
Action Recognition   | NTU RGB+D 120 | Accuracy (Cross-Subject) | 91.1  | EPP-Net (Parsing + Pose)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)