Kai Zhou, Shuhai Zhang, Zeng You, Jinwu Hu, Mingkui Tan, Fei Liu
Zero-shot skeleton-based action recognition aims to classify unseen skeleton-based human actions without prior exposure to those categories during training. This task is extremely challenging due to the difficulty of generalizing from known to unknown actions. Previous studies typically use two-stage training: pre-training a skeleton encoder on seen action categories with a cross-entropy loss, and then aligning pre-extracted skeleton and text features, enabling knowledge transfer to unseen classes through skeleton-text alignment and the generalization ability of language models. However, their efficacy is hindered by 1) insufficient discriminability of the skeleton features, since the frozen skeleton encoder fails to capture the alignment information needed for effective skeleton-text alignment; and 2) the neglect of the alignment bias between skeleton and unseen text features during testing. To this end, we propose a prototype-guided feature alignment paradigm for zero-shot skeleton-based action recognition, termed PGFA. Specifically, we develop an end-to-end cross-modal contrastive training framework to improve skeleton-text alignment, ensuring sufficient discriminability of the skeleton features. Additionally, we introduce a prototype-guided text feature alignment strategy to mitigate the adverse impact of the distribution discrepancy during testing. We provide a theoretical analysis supporting our prototype-guided text feature alignment strategy and empirically evaluate the overall PGFA on three well-known datasets. Compared with the top competitor, SMIE, our PGFA achieves absolute accuracy improvements of 22.96%, 12.53%, and 18.54% on the NTU-60, NTU-120, and PKU-MMD datasets, respectively.
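The two components named in the abstract can be illustrated with a minimal sketch. The contrastive loss below is a standard symmetric InfoNCE between skeleton and text embeddings, and the prototype-guided correction assumes the test-time alignment bias can be approximated by the mean offset between seen-class skeleton prototypes (per-class feature means) and their text features; both the temperature and this particular offset formulation are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(skel_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss between paired skeleton and text features.

    Each skeleton embedding is pulled toward the text embedding of its
    own class (diagonal of the similarity matrix) and pushed away from
    the other text embeddings in the batch, and vice versa.
    """
    skel = F.normalize(skel_feats, dim=-1)
    text = F.normalize(text_feats, dim=-1)
    logits = skel @ text.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(skel.size(0))          # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def prototype_shift(unseen_text_feats, seen_text_feats, seen_skel_prototypes):
    """Illustrative prototype-guided correction of unseen text features.

    Assumption: the skeleton-text alignment bias observed on seen classes
    (skeleton prototype minus text feature, averaged over classes) also
    applies to unseen classes, so the same mean offset is added to the
    unseen text features before nearest-neighbor classification.
    """
    offset = (seen_skel_prototypes - seen_text_feats).mean(dim=0, keepdim=True)
    return unseen_text_feats + offset
```

At test time, a skeleton feature would then be classified by its highest cosine similarity to the shifted unseen text features.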
| Dataset | Evaluation setting | Accuracy (%) | Model |
|---|---|---|---|
| NTU RGB+D | 5 unseen classes | 80.26 | PGFA |
| NTU RGB+D | 12 unseen classes | 55.99 | PGFA |
| NTU RGB+D | Random split | 93.17 | PGFA |
| NTU RGB+D 120 | 10 unseen classes | 79.99 | PGFA |
| NTU RGB+D 120 | 24 unseen classes | 59.42 | PGFA |
| NTU RGB+D 120 | Random split | 71.38 | PGFA |
| PKU-MMD | Random split | 87.80 | PGFA |