
Pose Refinement Graph Convolutional Network for Skeleton-based Action Recognition

Shijie Li, Jinhui Yi, Yazan Abu Farha, Juergen Gall

2020-10-14 · Skeleton Based Action Recognition · Action Recognition

Abstract

With the advances in capturing 2D or 3D skeleton data, skeleton-based action recognition has received increasing interest in recent years. As skeleton data is commonly represented by graphs, graph convolutional networks have been proposed for this task. While current graph convolutional networks accurately recognize actions, they are too expensive for robotics applications, where computational resources are limited. In this paper, we therefore propose a highly efficient graph convolutional network that addresses the limitations of previous works. This is achieved by a parallel structure that gradually fuses motion and spatial information and by reducing the temporal resolution as early as possible. Furthermore, we explicitly address the issue that human poses can contain errors. To this end, the network first refines the poses before they are further processed to recognize the action. We therefore call the network Pose Refinement Graph Convolutional Network. Compared to other graph convolutional networks, our network requires 86%-93% fewer parameters and reduces floating-point operations by 89%-96% while achieving comparable accuracy. It therefore provides a much better trade-off between accuracy, memory footprint, and processing time, which makes it suitable for robotics applications.
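The abstract outlines three ideas: refining noisy poses before recognition, fusing parallel motion and spatial streams, and downsampling the temporal resolution early to save computation. Below is a minimal PyTorch-style sketch of how such a pipeline could be wired together; all module names, layer sizes, and the residual-correction formulation are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: module names, dimensions, and the refinement
# formulation are assumptions; the paper defines the real architecture.
import torch
import torch.nn as nn


class PoseRefinement(nn.Module):
    """Predicts per-joint corrections so noisy input poses are refined
    before recognition (hypothetical residual formulation)."""
    def __init__(self, channels=3):
        super().__init__()
        self.correct = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):            # x: (N, C, T, V) = batch, coords, frames, joints
        return x + self.correct(x)   # residual correction of joint coordinates


class GraphConv(nn.Module):
    """Basic spatial graph convolution over a fixed skeleton adjacency."""
    def __init__(self, in_c, out_c, A):
        super().__init__()
        self.register_buffer("A", A)            # (V, V) normalized adjacency
        self.theta = nn.Conv2d(in_c, out_c, 1)

    def forward(self, x):                       # (N, C, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate neighbor joints
        return torch.relu(self.theta(x))


class PRGCNSketch(nn.Module):
    def __init__(self, A, num_classes=60):
        super().__init__()
        self.refine = PoseRefinement()
        # Parallel streams: spatial positions and frame-to-frame motion.
        self.spatial = GraphConv(3, 64, A)
        self.motion = GraphConv(3, 64, A)
        # Early temporal pooling keeps all later layers cheap.
        self.pool_t = nn.AvgPool2d(kernel_size=(4, 1))
        self.fuse = GraphConv(64, 128, A)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):                                # (N, 3, T, V)
        x = self.refine(x)
        motion = torch.zeros_like(x)
        motion[:, :, 1:] = x[:, :, 1:] - x[:, :, :-1]    # temporal differences
        h = self.spatial(x) + self.motion(motion)        # fusion, simplified to a sum
        h = self.pool_t(h)                               # reduce temporal resolution early
        h = self.fuse(h)
        return self.head(h.mean(dim=(2, 3)))             # global average pool -> logits
```

The defaults mirror the NTU RGB+D setting, which has 25 joints and 60 action classes, so a smoke test would be `PRGCNSketch(torch.eye(25))(torch.randn(2, 3, 32, 25))`.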

Results

Task                              | Dataset           | Metric        | Value | Model
Skeleton Based Action Recognition | Kinetics-Skeleton | Accuracy      | 33.7  | PR-GCN
Skeleton Based Action Recognition | NTU RGB+D         | Accuracy (CS) | 85.2  | PR-GCN
Skeleton Based Action Recognition | NTU RGB+D         | Accuracy (CV) | 91.7  | PR-GCN

(CS = cross-subject split, CV = cross-view split of NTU RGB+D.)

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)
Active Multimodal Distillation for Few-shot Action Recognition (2025-06-16)