Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data

Sijie Song, Cuiling Lan, Junliang Xing, Wen-Jun Zeng, Jiaying Liu

2016-11-18 · Skeleton Based Action Recognition · Action Recognition · Temporal Action Localization

Paper · PDF

Abstract

Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM); it learns to selectively focus on discriminative joints of the skeleton within each input frame and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model, both on the small SBU human action recognition dataset and on the currently largest NTU dataset.
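The two attention mechanisms described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's STA-LSTM: the projection matrices below are random stand-ins for the learned attention subnetworks, softmax normalization is assumed for both attention stages, and a single tanh projection stands in for the main LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)

T, J, D, H = 10, 15, 3, 8  # frames, joints, coordinate dim, feature size (illustrative)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical parameters standing in for the learned attention networks.
W_spat = rng.normal(size=(D,))        # scores each joint from its coordinates
W_main = rng.normal(size=(J * D, H))  # stand-in for the main LSTM transition
W_temp = rng.normal(size=(H,))        # scores each frame's output

X = rng.normal(size=(T, J, D))        # one skeleton sequence: T frames of J joints

# Spatial attention: a weight per joint within each frame,
# so discriminative joints contribute more to the frame feature.
alpha = softmax(X @ W_spat, axis=1)   # (T, J), rows sum to 1
X_att = X * alpha[:, :, None]         # reweighted joint coordinates

# Frame-wise features (the real model runs an LSTM over these inputs).
h = np.tanh(X_att.reshape(T, -1) @ W_main)       # (T, H)

# Temporal attention: a scalar weight per frame applied to the outputs,
# so informative frames dominate the sequence-level feature.
beta = softmax(h @ W_temp)                       # (T,), sums to 1
video_feature = (beta[:, None] * h).sum(axis=0)  # (H,)
```

The final `video_feature` would then feed a classifier trained with the paper's regularized cross-entropy loss, whose regularizers keep the attention weights from collapsing onto a single joint or frame.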

Results

Task                         | Dataset   | Metric        | Value | Model
Video                        | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Video                        | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Zero-Shot Learning           | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Zero-Shot Learning           | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Activity Recognition         | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Activity Recognition         | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Action Localization          | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Action Localization          | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Action Detection             | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Action Detection             | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
3D Action Recognition        | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
3D Action Recognition        | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM
Action Recognition           | NTU RGB+D | Accuracy (CS) | 73.4  | STA-LSTM
Action Recognition           | NTU RGB+D | Accuracy (CV) | 81.2  | STA-LSTM

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)