NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis

Amir Shahroudy, Jun Liu, Tian-Tsong Ng, Gang Wang
Recent approaches to depth-based human activity analysis have achieved outstanding performance and proved the effectiveness of 3D representations for the classification of action classes. However, currently available depth-based and RGB+D-based action recognition benchmarks suffer from several limitations, including small numbers of training samples, distinct class labels, camera views, and subjects. In this paper, we introduce a large-scale dataset for RGB+D human action recognition, comprising more than 56 thousand video samples and 4 million frames collected from 40 distinct subjects. Our dataset contains 60 different action classes covering daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure that models the long-term temporal correlation of the features of each body part and utilizes them for better action classification. Experimental results show the advantages of deep learning methods over state-of-the-art hand-crafted features under the proposed cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large-scale dataset will enable the community to apply, develop, and adapt various data-hungry learning techniques to the task of depth-based and RGB+D-based human activity analysis.
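The part-aware recurrent structure described above can be sketched as follows: skeleton joints are grouped into body parts, each part is modeled by its own LSTM over time, and the per-part representations are fused for classification. This is a minimal illustrative sketch in PyTorch, not the paper's exact architecture; the class name, the part grouping, and all hyperparameters (hidden size, number of layers) are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn


class PartAwareLSTM(nn.Module):
    """Sketch of a part-based recurrent model: one LSTM per body-part
    group of joints, with the final per-part hidden states concatenated
    and fed to a linear classifier. Hyperparameters are illustrative."""

    def __init__(self, joints_per_part, coords=3, hidden=128, num_classes=60):
        super().__init__()
        self.joints_per_part = joints_per_part
        self.coords = coords
        # One independent LSTM per body part (e.g. torso, arms, legs).
        self.parts = nn.ModuleList(
            nn.LSTM(n * coords, hidden, batch_first=True)
            for n in joints_per_part
        )
        self.classifier = nn.Linear(hidden * len(joints_per_part), num_classes)

    def forward(self, x):
        # x: (batch, time, total_joints * coords), joints ordered part by part.
        feats, offset = [], 0
        for lstm, n in zip(self.parts, self.joints_per_part):
            width = n * self.coords
            part_seq = x[:, :, offset:offset + width]
            offset += width
            _, (h, _) = lstm(part_seq)   # h: (num_layers, batch, hidden)
            feats.append(h[-1])          # last layer's final hidden state
        return self.classifier(torch.cat(feats, dim=1))  # class scores


# Example: five assumed parts covering the 25 Kinect v2 skeleton joints.
model = PartAwareLSTM(joints_per_part=[9, 4, 4, 4, 4])
clip = torch.randn(2, 30, 25 * 3)  # 2 clips, 30 frames, 25 joints x (x, y, z)
scores = model(clip)               # shape: (2, 60)
```

Splitting the joints per part keeps each LSTM's input small and lets the temporal dynamics of, say, an arm be modeled independently before fusion, which is the intuition behind part-aware modeling of skeleton sequences.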
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | NTU RGB+D | Accuracy (CS) | 62.93 | Part-aware LSTM |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 70.27 | Part-aware LSTM |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 60.7 | Deep LSTM |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 67.3 | Deep LSTM |