Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Skeleton-based Action Recognition via Spatial and Temporal Transformer Networks

Chiara Plizzari, Marco Cannici, Matteo Matteucci

2020-08-17 · Skeleton Based Action Recognition · Human Activity Recognition · Action Recognition · Action Recognition In Videos · Activity Recognition

Paper · PDF · Code (official)

Abstract

Skeleton-based Human Activity Recognition has attracted great interest in recent years, as skeleton data has proven robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. In particular, Spatial-Temporal Graph Convolutional Networks (ST-GCN) have proven effective in learning both spatial and temporal dependencies on non-Euclidean data such as skeleton graphs. Nevertheless, effectively encoding the latent information underlying the 3D skeleton remains an open problem, especially when it comes to extracting useful information from joint motion patterns and their correlations. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) that models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) captures intra-frame interactions between different body parts, while a Temporal Self-Attention module (TSA) models inter-frame correlations. The two are combined in a two-stream network whose performance is evaluated on three large-scale datasets, NTU-RGB+D 60, NTU-RGB+D 120, and Kinetics Skeleton 400, consistently improving over the backbone. Compared with methods that use the same input data, the proposed ST-TR achieves state-of-the-art performance on all datasets when using joint coordinates as input, and results on par with the state of the art when bone information is added.
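The SSA/TSA idea described above can be sketched in a few lines: the same self-attention operator is applied once across the joints of each frame (spatial) and once across the frames of each joint (temporal). This is a minimal NumPy illustration, not the authors' implementation; the tensor shapes, single-head projections, and weight names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (n_tokens, c) -> (n_tokens, c); single-head scaled dot-product attention
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return scores @ v

rng = np.random.default_rng(0)
T, V, C = 4, 25, 8                      # frames, joints, channels (illustrative sizes)
x = rng.standard_normal((T, V, C))      # a toy skeleton sequence
wq, wk, wv = (rng.standard_normal((C, C)) for _ in range(3))

# SSA: attention across the V joints within each frame (intra-frame interactions)
ssa = np.stack([self_attention(x[t], wq, wk, wv) for t in range(T)])

# TSA: attention across the T frames of each joint (inter-frame correlations)
tsa = np.stack([self_attention(x[:, v], wq, wk, wv) for v in range(V)], axis=1)

print(ssa.shape, tsa.shape)  # (4, 25, 8) (4, 25, 8)
```

In the paper the two streams are fused rather than used in isolation; this sketch only shows how the same attention operator yields the two complementary views of the skeleton sequence.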

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Video | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Video | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Temporal Action Localization | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Zero-Shot Learning | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Zero-Shot Learning | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Zero-Shot Learning | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Activity Recognition | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Activity Recognition | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Activity Recognition | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Action Localization | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Action Localization | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Action Localization | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Action Detection | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Action Detection | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Action Detection | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| 3D Action Recognition | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| 3D Action Recognition | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| 3D Action Recognition | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |
| Action Recognition | Kinetics-Skeleton dataset | Accuracy | 37.4 | ST-TR-agcn |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 89.9 | ST-TR-agcn |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 96.1 | ST-TR-agcn |

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs (2025-07-15)
- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
- SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network (2025-06-25)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)