


A Fine-to-Coarse Convolutional Neural Network for 3D Human Action Recognition

Thao Minh Le, Nakamasa Inoue, Koichi Shinoda

2018-05-30 · 3D Action Recognition · Skeleton Based Action Recognition · Action Recognition · Temporal Action Localization
Paper · PDF

Abstract

This paper presents a new framework for human action recognition from a 3D skeleton sequence. Previous studies do not fully utilize the temporal relationships between segments of a human action. Some studies have successfully used very deep Convolutional Neural Network (CNN) models but often suffer from data insufficiency. In this study, we first segment a skeleton sequence into distinct temporal segments in order to exploit the correlations between them. The temporal and spatial features of the skeleton sequence are then extracted simultaneously by a fine-to-coarse (F2C) CNN architecture optimized for human skeleton sequences. We evaluate the proposed method on the NTU RGB+D and SBU Kinect Interaction datasets. It achieves accuracies of 79.6% and 84.6% on NTU RGB+D under the cross-subject and cross-view protocols, respectively, which are nearly identical to state-of-the-art performance. In addition, our method significantly improves the accuracy for actions involving two-person interactions.
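
The approach outlined in the abstract (split the skeleton sequence into temporal segments, extract per-segment features, then merge them fine-to-coarse before classification) can be illustrated with a short PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: the segment count, layer widths, pairwise merge strategy, and the F2CSkeletonNet name are hypothetical choices made for brevity.

```python
# Minimal fine-to-coarse (F2C) style CNN sketch for skeleton sequences.
# Layer sizes, number of segments, and the merge scheme are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class F2CSkeletonNet(nn.Module):
    def __init__(self, coords=3, num_segments=4, num_classes=60):
        super().__init__()
        assert num_segments % 2 == 0, "segments are merged pairwise"
        self.num_segments = num_segments
        # Fine level: a small shared 2D CNN applied to each temporal segment.
        # Per-segment input shape: (coords, frames_per_segment, num_joints).
        self.fine = nn.Sequential(
            nn.Conv2d(coords, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Coarse level: neighbouring segment features are concatenated along
        # the channel axis and convolved again, widening the temporal context.
        self.coarse = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64 * num_segments // 2, num_classes)

    def forward(self, x):
        # x: (batch, coords, frames, joints); frames divisible by num_segments.
        b, c, t, v = x.shape
        seg_len = t // self.num_segments
        # Split the sequence into equal temporal segments (fine level).
        segments = x.split(seg_len, dim=2)[: self.num_segments]
        fine_feats = [self.fine(s) for s in segments]
        # Merge neighbouring segments pairwise (fine -> coarse).
        coarse_feats = []
        for i in range(0, self.num_segments, 2):
            pair = torch.cat([fine_feats[i], fine_feats[i + 1]], dim=1)
            coarse_feats.append(self.coarse(pair).flatten(1))
        return self.classifier(torch.cat(coarse_feats, dim=1))


if __name__ == "__main__":
    # Toy input: 2 clips, 3D coordinates, 64 frames, 25 joints (NTU layout).
    net = F2CSkeletonNet()
    clip = torch.randn(2, 3, 64, 25)
    print(net(clip).shape)  # torch.Size([2, 60])
```

The point of the sketch is the two-level structure: the shared fine CNN sees each short temporal segment independently, and the coarse stage only mixes information across segments after their local spatio-temporal features have been extracted.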

Results

Task | Dataset | Metric | Value | Model
Video | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Video | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Temporal Action Localization | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Temporal Action Localization | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Zero-Shot Learning | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Zero-Shot Learning | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Activity Recognition | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Activity Recognition | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Action Localization | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Action Localization | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Action Detection | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Action Detection | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
3D Action Recognition | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
3D Action Recognition | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton
Action Recognition | NTU RGB+D | Accuracy (CS) | 79.6 | F2CSkeleton
Action Recognition | NTU RGB+D | Accuracy (CV) | 84.6 | F2CSkeleton

Related Papers

A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Zero-shot Skeleton-based Action Recognition with Prototype-guided Feature Alignment (2025-07-01)
EgoAdapt: Adaptive Multisensory Distillation and Policy Learning for Efficient Egocentric Perception (2025-06-26)
Feature Hallucination for Self-supervised Action Recognition (2025-06-25)
CARMA: Context-Aware Situational Grounding of Human-Robot Group Interactions by Combining Vision-Language Models with Object and Action Recognition (2025-06-25)
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
Adapting Vision-Language Models for Evaluating World Models (2025-06-22)