Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention

Guyue Hu, Bo Cui, Shan Yu

2018-11-10 · Skeleton Based Action Recognition · Action Recognition · Temporal Action Localization

Paper · PDF

Abstract

Benefiting from its succinctness and robustness, skeleton-based action recognition has recently attracted much attention. Most existing methods utilize local networks (e.g., recurrent, convolutional, and graph convolutional networks) to extract spatio-temporal dynamics hierarchically. As a consequence, the local and non-local dependencies, which contain more details and semantics respectively, are captured asynchronously at different levels of layers. Moreover, existing methods are limited to the spatio-temporal domain and ignore information in the frequency domain. To better extract synchronous detailed and semantic information from multiple domains, we propose a residual frequency attention (rFA) block to focus on discriminative patterns in the frequency domain, and a synchronous local and non-local (SLnL) block to simultaneously capture the details and semantics in the spatio-temporal domain. Besides, a soft-margin focal loss (SMFL) is proposed to optimize the whole learning process, which automatically conducts data selection and encourages intrinsic margins in classifiers. Our approach significantly outperforms other state-of-the-art methods on several large-scale datasets.
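The abstract combines two well-known ideas in its SMFL objective: focal modulation, which down-weights easy examples (implicit data selection), and a margin on the true-class logit, which encourages larger decision margins. The paper's exact SMFL formulation is not given here, so the following is only a minimal sketch of a generic margin-augmented focal loss; the `gamma` and `margin` values are illustrative defaults, not the authors' settings.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def margin_focal_loss(logits, target, gamma=2.0, margin=0.35):
    """Illustrative margin-augmented focal loss (NOT the paper's exact SMFL).

    - Subtract an additive margin from the true-class logit before the
      softmax, so the classifier must separate classes by at least that
      margin to achieve low loss.
    - Scale the cross-entropy term by (1 - p_t)^gamma, the focal factor,
      so confidently correct samples contribute little (soft data selection).
    """
    adjusted = list(logits)
    adjusted[target] -= margin            # enforce a margin on the true class
    p_t = softmax(adjusted)[target]       # probability of the true class
    return -((1.0 - p_t) ** gamma) * math.log(p_t)
```

With `gamma=0` and `margin=0` this reduces to plain cross-entropy; increasing `gamma` shrinks the loss on well-classified samples, and increasing `margin` raises it, which is the qualitative behavior the abstract attributes to SMFL.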

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Video | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Video | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Temporal Action Localization | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Zero-Shot Learning | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Zero-Shot Learning | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Zero-Shot Learning | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Activity Recognition | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Activity Recognition | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Activity Recognition | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Action Localization | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Action Localization | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Action Localization | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Action Detection | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Action Detection | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Action Detection | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| 3D Action Recognition | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| 3D Action Recognition | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| 3D Action Recognition | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |
| Action Recognition | Kinetics-Skeleton | Accuracy | 36.6 | SLnL-rFA |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 89.1 | SLnL-rFA |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 94.9 | SLnL-rFA |

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
- EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
- Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
- CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
- Adapting Vision-Language Models for Evaluating World Models (2025-06-22)