Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, Lei Zhang
Skeleton-based action recognition has recently received considerable attention. Current approaches are typically formulated as one-hot classification tasks and do not fully exploit the semantic relations between actions. For example, "make victory sign" and "thumb up" are two hand-gesture actions whose major difference lies in the movement of the hands. This information is absent from the categorical one-hot encoding of action classes but can be unveiled from the action description. Utilizing action descriptions in training could therefore benefit representation learning. In this work, we propose a Generative Action-description Prompts (GAP) approach for skeleton-based action recognition. More specifically, we employ a pre-trained large-scale language model as a knowledge engine to automatically generate text descriptions of the body-part movements of actions, and propose a multi-modal training scheme that uses the text encoder to generate feature vectors for different body parts and supervise the skeleton encoder for action representation learning. Experiments show that our proposed GAP method achieves noticeable improvements over various baseline models without extra computation cost at inference. GAP achieves new state-of-the-art results on popular skeleton-based action recognition benchmarks, including NTU RGB+D, NTU RGB+D 120 and NW-UCLA. The source code is available at https://github.com/MartinXM/GAP.
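The per-part supervision described above can be sketched as an alignment loss between skeleton-encoder part features and the text-encoder features of the generated part descriptions. This is a minimal illustrative sketch, assuming a cosine-distance objective; the function name and the exact loss form are assumptions, not the paper's precise formulation.

```python
import numpy as np

def part_alignment_loss(skeleton_feats, text_feats):
    """Mean (1 - cosine similarity) between per-body-part skeleton features
    and the corresponding text-description features.

    Both inputs have shape (num_parts, dim); row i of each matrix describes
    the same body part (e.g. hands, arms, legs).
    """
    # L2-normalize each part's feature vector
    s = skeleton_feats / np.linalg.norm(skeleton_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    # Per-part cosine similarity, then average the cosine distances
    cos = np.sum(s * t, axis=1)
    return float(np.mean(1.0 - cos))

# Perfectly aligned features give zero loss
f = np.random.rand(4, 8)
print(part_alignment_loss(f, f))
```

In practice such a term would be combined with the standard classification loss; since the text encoder is only used as a training-time supervisor, no extra cost is incurred at inference, consistent with the claim above.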
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Skeleton-based Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Setup) | 91.1 | LST |
| Skeleton-based Action Recognition | NTU RGB+D 120 | Accuracy (Cross-Subject) | 89.9 | LST |
| Skeleton-based Action Recognition | NTU RGB+D 120 | Ensembled Modalities | 4 | LST |
| Skeleton-based Action Recognition | NW-UCLA | Accuracy | 97.2 | LST |
| Skeleton-based Action Recognition | NTU RGB+D | Accuracy (Cross-Subject) | 92.9 | LST |
| Skeleton-based Action Recognition | NTU RGB+D | Accuracy (Cross-View) | 97.0 | LST |
| Skeleton-based Action Recognition | NTU RGB+D | Ensembled Modalities | 4 | LST |