Anqi Zhu, Qiuhong Ke, Mingming Gong, James Bailey
While remarkable progress has been made on supervised skeleton-based action recognition, the challenge of zero-shot recognition remains relatively unexplored. In this paper, we argue that relying solely on aligning label-level semantics and global skeleton features is insufficient to effectively transfer locally consistent visual knowledge from seen to unseen classes. To address this limitation, we introduce Part-aware Unified Representation between Language and Skeleton (PURLS) to explore visual-semantic alignment at both local and global scales. PURLS introduces a new prompting module and a novel partitioning module to generate aligned textual and visual representations across different levels. The former leverages a pre-trained GPT-3 to infer refined descriptions of the global and local (body-part-based and temporal-interval-based) movements from the original action labels. The latter employs an adaptive sampling strategy to group visual features from all body joint movements that are semantically relevant to a given description. Our approach is evaluated on various skeleton/language backbones and three large-scale datasets: NTU-RGB+D 60, NTU-RGB+D 120, and a newly curated dataset, Kinetics-skeleton 200. The results showcase the universality and superior performance of PURLS, surpassing prior skeleton-based solutions and standard baselines from other domains. The source code is available at https://github.com/azzh1/PURLS.
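As a rough illustration of the prompting module, the sketch below shows how global and part-level movement descriptions might be generated from an action label. This is an assumption-laden sketch, not the authors' released code: the paper uses GPT-3, whereas this example uses the current OpenAI chat API, and the `PARTS` list, prompt wording, and `describe_action` helper are all hypothetical.

```python
# A minimal sketch (assumed, not the authors' code) of prompting an LLM for
# global and per-body-part movement descriptions of an action label.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PARTS = ["head", "hands", "arms", "torso", "legs"]  # hypothetical partition

def describe_action(label: str) -> dict:
    """Return one global description plus one description per body part."""
    descriptions = {}
    for part in ["whole body"] + PARTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in for the GPT-3 engine in the paper
            messages=[{
                "role": "user",
                "content": (
                    f"In one sentence, describe how the {part} moves when "
                    f"a person performs the action '{label}'."
                ),
            }],
        )
        descriptions[part] = resp.choices[0].message.content.strip()
    return descriptions
```

The partitioning-and-alignment idea can likewise be sketched as cross-attention pooling: one learnable query per description level gathers the joint features relevant to that description, and the pooled vectors are scored against text embeddings by cosine similarity. Again, `PartAwareAlignment` and all shapes below are illustrative assumptions (a frozen skeleton backbone producing per-joint features, a frozen text encoder producing one global plus P local description embeddings per class), not the released implementation.

```python
# A minimal PyTorch sketch, under assumed shapes, of part-aware
# visual-semantic alignment with learnable per-description queries.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartAwareAlignment(nn.Module):
    def __init__(self, d_v: int, d_t: int, n_parts: int, n_heads: int = 4):
        super().__init__()
        # One learnable query per description level (1 global + n_parts local).
        self.queries = nn.Parameter(torch.randn(n_parts + 1, d_t))
        # d_t must be divisible by n_heads.
        self.attn = nn.MultiheadAttention(d_t, n_heads, batch_first=True)
        self.proj = nn.Linear(d_v, d_t)  # map visual features into text space

    def forward(self, joint_feats: torch.Tensor, text_feats: torch.Tensor):
        # joint_feats: (B, T*J, d_v) from a frozen skeleton backbone.
        # text_feats: (C, n_parts + 1, d_t) from a frozen text encoder.
        v = self.proj(joint_feats)                            # (B, T*J, d_t)
        q = self.queries.unsqueeze(0).expand(v.size(0), -1, -1)
        pooled, _ = self.attn(q, v, v)                        # (B, P+1, d_t)
        pooled = F.normalize(pooled, dim=-1)
        text = F.normalize(text_feats, dim=-1)
        # Cosine similarity per level, averaged over global + local levels.
        return torch.einsum("bpd,cpd->bcp", pooled, text).mean(-1)  # (B, C)

# Zero-shot inference: score clips against unseen-class description embeddings.
model = PartAwareAlignment(d_v=256, d_t=512, n_parts=5)
joints = torch.randn(8, 64 * 25, 256)    # 8 clips, 64 frames x 25 joints
unseen_texts = torch.randn(10, 6, 512)   # 10 unseen classes, 6 levels each
pred = model(joints, unseen_texts).argmax(dim=-1)  # predicted class indices
```

At test time the model never sees unseen-class skeletons during training; only their text descriptions are swapped in, which is what makes the recognition zero-shot.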
| Task | Dataset | Unseen classes | Accuracy (%) | Model |
|---|---|---|---|---|
| Zero-shot skeleton-based action recognition | NTU RGB+D 60 | 5 | 79.23 | PURLS |
| Zero-shot skeleton-based action recognition | NTU RGB+D 60 | 12 | 40.99 | PURLS |
| Zero-shot skeleton-based action recognition | NTU RGB+D 120 | 10 | 71.95 | PURLS |
| Zero-shot skeleton-based action recognition | NTU RGB+D 120 | 24 | 52.01 | PURLS |