Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Self-Attention Network for Skeleton-based Human Action Recognition

Sangwoo Cho, Muhammad Hasan Maqbool, Fei Liu, Hassan Foroosh

Published: 2019-12-18
Tasks: Skeleton Based Action Recognition · Action Recognition · Temporal Action Localization
Paper · PDF

Abstract

Skeleton-based action recognition has recently attracted a lot of attention. Researchers are coming up with new approaches for extracting spatio-temporal relations and making considerable progress on large-scale skeleton-based datasets. Most of the architectures being proposed are based upon recurrent neural networks (RNNs), convolutional neural networks (CNNs), and graph-based CNNs. In skeleton-based action recognition, long-term contextual information is of central importance, yet it is not captured by current architectures. To better represent and capture long-term spatio-temporal relationships, we propose three variants of the Self-Attention Network (SAN), namely SAN-V1, SAN-V2, and SAN-V3. Our SAN variants have the impressive capability of extracting high-level semantics by capturing long-range correlations. We have also integrated the Temporal Segment Network (TSN) with our SAN variants, which improved overall performance. Different configurations of the SAN variants and TSN are explored with extensive experiments. Our chosen configuration outperforms the state of the art in Top-1 and Top-5 accuracy by 4.4% and 7.9%, respectively, on Kinetics, and shows consistently better performance than state-of-the-art methods on NTU RGB+D.
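The long-range mechanism the abstract refers to can be illustrated with a minimal scaled dot-product self-attention sketch over a sequence of skeleton frames. This is a generic illustration, not the paper's SAN architecture: all dimensions, weight matrices, and the single-head setup are assumptions chosen for brevity.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a (frames, features) sequence.

    Every output frame is a weighted mix of ALL input frames, which is how
    self-attention captures long-range temporal correlations in one step.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (frames, frames) pairwise affinities
    weights = softmax(scores, axis=-1)   # each row sums to 1: attention over frames
    return weights @ v                   # context-mixed frame features

# Illustrative dimensions (assumed): 30 frames, 25 joints x 3 coords = 75 features.
rng = np.random.default_rng(0)
frames, feat, d_model = 30, 75, 64
x = rng.standard_normal((frames, feat))
w_q = rng.standard_normal((feat, d_model)) * 0.1
w_k = rng.standard_normal((feat, d_model)) * 0.1
w_v = rng.standard_normal((feat, d_model)) * 0.1
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (30, 64)
```

Unlike an RNN, which must propagate information step by step, the attention matrix here relates every pair of frames directly, so distant frames interact in a single layer.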

Results

Task                         | Dataset   | Metric        | Value | Model
Video                        | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Video                        | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Zero-Shot Learning           | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Zero-Shot Learning           | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Activity Recognition         | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Activity Recognition         | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Action Localization          | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Action Localization          | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Action Detection             | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Action Detection             | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
3D Action Recognition        | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
3D Action Recognition        | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN
Action Recognition           | NTU RGB+D | Accuracy (CS) | 87.2  | TS-SAN
Action Recognition           | NTU RGB+D | Accuracy (CV) | 92.7  | TS-SAN

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)