Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata
Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space. As labeled images are expensive, one direction is to augment the dataset by generating either images or image features. However, the former misses fine-grained details and the latter requires learning a mapping associated with class embeddings. In this work, we take feature generation one step further and propose a model in which modality-specific aligned variational autoencoders learn a shared latent space of image features and class embeddings. The resulting latent features retain the discriminative information of both images and classes, and we train a softmax classifier on them. The key to our approach is that we align the distributions learned from images and from side-information to construct latent features that contain the essential multi-modal information associated with unseen classes. We evaluate the learned latent features on several benchmark datasets, i.e., CUB, SUN, AWA1 and AWA2, and establish a new state of the art on generalized zero-shot as well as few-shot learning. Moreover, our results on ImageNet with various zero-shot splits show that our latent features generalize well in large-scale settings.
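The abstract describes two modality-specific VAEs whose latent distributions are aligned through cross-reconstruction and a distribution-matching term. The following PyTorch sketch illustrates that training objective under assumed layer sizes; the loss weights `beta`, `gamma`, and `delta` and all dimensions are illustrative placeholders, not the paper's hyperparameters.

```python
# Minimal sketch of cross- and distribution-aligned VAEs in the spirit of CADA-VAE.
# Layer sizes and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, out_dim))

    def forward(self, z):
        return self.net(z)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) with the reparameterization trick
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

def wasserstein_gaussians(mu1, logvar1, mu2, logvar2):
    # 2-Wasserstein distance between two diagonal Gaussians (distribution alignment)
    std1, std2 = torch.exp(0.5 * logvar1), torch.exp(0.5 * logvar2)
    return (torch.sum((mu1 - mu2) ** 2, dim=1)
            + torch.sum((std1 - std2) ** 2, dim=1)).sqrt().mean()

def aligned_vae_loss(x, c, enc_x, dec_x, enc_c, dec_c,
                     beta=1.0, gamma=1.0, delta=1.0):
    """x: image features, c: class embeddings of the same samples."""
    mu_x, logvar_x = enc_x(x)
    mu_c, logvar_c = enc_c(c)
    z_x, z_c = reparameterize(mu_x, logvar_x), reparameterize(mu_c, logvar_c)

    # Per-modality VAE terms: reconstruction plus KL regularization
    vae = (F.mse_loss(dec_x(z_x), x) + F.mse_loss(dec_c(z_c), c)
           + beta * (kl_divergence(mu_x, logvar_x) + kl_divergence(mu_c, logvar_c)))
    # Cross-alignment: decode each modality from the other modality's latent code
    cross = F.mse_loss(dec_x(z_c), x) + F.mse_loss(dec_c(z_x), c)
    # Distribution alignment between the two latent posteriors
    dist = wasserstein_gaussians(mu_x, logvar_x, mu_c, logvar_c)
    return vae + gamma * cross + delta * dist
```

After training, latent features would be sampled from the image-feature VAE for seen classes and from the class-embedding VAE for unseen classes, and a linear softmax classifier trained on those samples, as the abstract describes.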
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Zero-Shot Learning | NTU RGB+D 120 | Accuracy (10 unseen classes) | 59.53 | CADA-VAE |
| Zero-Shot Learning | NTU RGB+D 120 | Accuracy (24 unseen classes) | 35.77 | CADA-VAE |
| Zero-Shot Learning | NTU RGB+D 120 | Random Split Accuracy | 45.14 | CADA-VAE |
| Zero-Shot Learning | PKU-MMD | Random Split Accuracy | 60.74 | CADA-VAE |
| Zero-Shot Learning | NTU RGB+D | Accuracy (12 unseen classes) | 28.96 | CADA-VAE |
| Zero-Shot Learning | NTU RGB+D | Accuracy (5 unseen classes) | 76.84 | CADA-VAE |
| Zero-Shot Learning | NTU RGB+D | Random Split Accuracy | 60.74 | CADA-VAE |
| Generalized Few-Shot Learning | AwA2 | Per-Class Accuracy (1-shot) | 69.6 | CADA-VAE |
| Generalized Few-Shot Learning | AwA2 | Per-Class Accuracy (2-shot) | 73.7 | CADA-VAE |
| Generalized Few-Shot Learning | AwA2 | Per-Class Accuracy (5-shot) | 78.1 | CADA-VAE |
| Generalized Few-Shot Learning | AwA2 | Per-Class Accuracy (10-shot) | 80.2 | CADA-VAE |
| Generalized Few-Shot Learning | AwA2 | Per-Class Accuracy (20-shot) | 80.9 | CADA-VAE |