Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yan-Feng Wang, Qi Tian
Action recognition with skeleton data has recently attracted much attention in computer vision. Previous studies are mostly based on fixed skeleton graphs, which capture only local physical dependencies among joints and may miss implicit joint correlations. To capture richer dependencies, we introduce an encoder-decoder structure, called the A-link inference module, to capture action-specific latent dependencies, i.e., actional links, directly from actions. We also extend the existing skeleton graphs to represent higher-order dependencies, i.e., structural links. Combining the two types of links into a generalized skeleton graph, we further propose the actional-structural graph convolution network (AS-GCN), which stacks actional-structural graph convolution and temporal convolution as a basic building block, to learn both spatial and temporal features for action recognition. A future pose prediction head is added in parallel to the recognition head to help capture more detailed action patterns through self-supervision. We validate AS-GCN in action recognition on two skeleton datasets, NTU-RGB+D and Kinetics. The proposed AS-GCN achieves consistently large improvements over state-of-the-art methods. As a side product, AS-GCN also shows promising results for future pose prediction.
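The basic building block described above — a graph convolution over a generalized skeleton graph that combines structural and actional links, followed by a temporal convolution across frames — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the weight shapes, and the simple additive fusion of the structural and actional branches are choices made here for clarity.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def as_gcn_block(X, A_struct, A_act, W_s, W_a, W_t):
    """One actional-structural block: a graph convolution over the joints,
    then a 1-D temporal convolution over the frames.

    X:        (T, V, C) sequence of T frames, V joints, C channels.
    A_struct: (V, V) structural-link adjacency (physical skeleton edges).
    A_act:    (V, V) actional-link adjacency (inferred latent dependencies).
    W_s, W_a: (C, C_out) weights of the structural and actional branches.
    W_t:      (K, C_out, C_out) temporal kernel of size K.
    """
    S = normalize_adjacency(A_struct)
    L = normalize_adjacency(A_act)
    # Spatial step: each frame's joint features are propagated along both
    # link types; the two branches are fused by summation, then ReLU.
    H = np.maximum(0.0, S @ X @ W_s + L @ X @ W_a)  # (T, V, C_out)
    # Temporal step: 1-D convolution along the frame axis with same padding.
    K = W_t.shape[0]
    pad = K // 2
    Hp = np.pad(H, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(H)
    for k in range(K):
        out += Hp[k:k + H.shape[0]] @ W_t[k]
    return np.maximum(0.0, out)
```

Stacking several such blocks, then global pooling over joints and frames into a classifier, yields the recognition head; the prediction head described in the abstract would branch off the same features.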
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Action Recognition | Kinetics-Skeleton dataset | Accuracy | 34.8 | AS-GCN |
| Action Recognition | NTU RGB+D | Accuracy (CS) | 86.8 | AS-GCN |
| Action Recognition | NTU RGB+D | Accuracy (CV) | 94.2 | AS-GCN |