Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition

Jun Liu, Amir Shahroudy, Dong Xu, Gang Wang

2016-07-24 · 3D Action Recognition · Skeleton Based Action Recognition · Action Recognition · Temporal Action Localization

Paper | PDF

Abstract

3D action recognition, the analysis of human actions based on 3D skeleton data, has recently become popular due to its succinct, robust, and view-invariant representation. Recent work on this problem has developed RNN-based learning methods to model contextual dependencies in the temporal domain. In this paper, we extend this idea to the spatio-temporal domain to analyze the hidden sources of action-related information within the input data over both domains concurrently. Inspired by the graphical structure of the human skeleton, we further propose a more powerful tree-structure-based traversal method. To handle noise and occlusion in 3D skeleton data, we introduce a new gating mechanism within the LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell. Our method achieves state-of-the-art performance on four challenging benchmark datasets for 3D human action analysis.
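The trust-gate idea in the abstract can be sketched in a few lines. The toy cell below is an illustration only, not the paper's exact ST-LSTM equations: it predicts the current input from the previous hidden state and down-weights the input's contribution to the memory cell when the observation disagrees with that prediction (e.g. a noisy or occluded skeleton joint). The predictor `W_p` and the sharpness constant `lam` are assumed placeholders for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TrustGateLSTMCell:
    """Toy LSTM cell with a trust gate (illustrative sketch, not the
    paper's precise formulation). The gate suppresses the effect of
    unreliable inputs on the long-term memory in the cell state."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = input_dim + hidden_dim
        # One weight matrix per gate: input, forget, output, candidate.
        self.W = {g: 0.1 * rng.standard_normal((hidden_dim, d))
                  for g in ("i", "f", "o", "u")}
        self.b = {g: np.zeros(hidden_dim) for g in ("i", "f", "o", "u")}
        # Predictor that estimates the current input from the previous
        # hidden state (assumed form for this sketch).
        self.W_p = 0.1 * rng.standard_normal((input_dim, hidden_dim))
        self.lam = 2.0  # sharpness of the trust gate (assumed)

    def step(self, x, h_prev, c_prev):
        z = np.concatenate([x, h_prev])
        i = sigmoid(self.W["i"] @ z + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.b["f"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        u = np.tanh(self.W["u"] @ z + self.b["u"])
        # Trust gate: close to 1 when x matches its prediction,
        # near 0 when it disagrees (likely noise or occlusion).
        x_pred = np.tanh(self.W_p @ h_prev)
        tau = np.exp(-self.lam * np.sum((x - x_pred) ** 2))
        # Only the *new* information is scaled by the trust score, so an
        # unreliable frame barely disturbs the accumulated context.
        c = f * c_prev + tau * (i * u)
        h = o * np.tanh(c)
        return h, c, tau
```

Running a short sequence through the cell shows the mechanism at work: a frame far from the prediction yields a small `tau`, so the cell state changes little for that step.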

Results

All results are on NTU RGB+D, where the source page lists the same scores under several task categories (Video, Temporal Action Localization, Zero-Shot Learning, Activity Recognition, Action Localization, Action Detection, 3D Action Recognition, Action Recognition):

| Model | Dataset | Accuracy (CS) | Accuracy (CV) |
|---|---|---|---|
| Spatio-Temporal LSTM | NTU RGB+D | 69.2 | 77.7 |
| ST-LSTM | NTU RGB+D | 61.7 | 75.5 |

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
- Adapting Vision-Language Models for Evaluating World Models (2025-06-22)