Siavash Khodadadeh, Sharare Zehtabian, Saeed Vahidian, Weijia Wang, Bill Lin, Ladislau Bölöni
Unsupervised meta-learning approaches rely on synthetic meta-tasks that are created using techniques such as random selection, clustering and/or augmentation. Unfortunately, clustering and augmentation are domain-dependent, and thus they require either manual tweaking or expensive learning. In this work, we describe an approach that generates meta-tasks using generative models. A critical component is a novel approach of sampling from the latent space that generates objects grouped into synthetic classes forming the training and validation data of a meta-task. We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines on few-shot classification tasks on the most widely used benchmark datasets. In addition, the approach promises to be applicable without manual tweaking over a wider range of domains than previous approaches.
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 40.05 | LASIUM |
| Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 54.56 | LASIUM |
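The latent-space sampling idea from the abstract can be illustrated with a minimal sketch: sample one anchor latent vector per synthetic class, then create in-class variants by interpolating each anchor toward random latents before decoding. This is an assumption-laden illustration, not the paper's implementation; the generator is replaced by a random linear map, and the names `decode`, `N_WAY`, `K_SHOT`, and `alpha` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 128
N_WAY, K_SHOT = 5, 1  # a 5-way 1-shot meta-task, as in the benchmark

# Stand-in for a pre-trained generator G: z -> image (hypothetical; in
# LASIUM this would be a GAN generator or VAE decoder trained without labels).
W = rng.standard_normal((LATENT_DIM, 64))
def decode(z):
    return np.tanh(z @ W)

# 1. Sample one anchor latent vector per synthetic class.
anchors = rng.standard_normal((N_WAY, LATENT_DIM))

# 2. Create in-class variants by interpolating each anchor toward a
#    random latent; a small alpha keeps decoded samples close to the
#    anchor, so they plausibly share one synthetic class identity.
alpha = 0.2
support, labels = [], []
for c, z_anchor in enumerate(anchors):
    for _ in range(K_SHOT):
        z_rand = rng.standard_normal(LATENT_DIM)
        z = (1 - alpha) * z_anchor + alpha * z_rand
        support.append(decode(z))
        labels.append(c)

support = np.stack(support)  # shape: (N_WAY * K_SHOT, image_dim)
```

The labeled `(support, labels)` pairs would then serve as the training split of one synthetic meta-task; repeating the procedure yields the validation split and further tasks, with no human labels involved.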