Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation

Antreas Antoniou, Amos Storkey

2019-02-26 · Few-Shot Learning · Meta-Learning · Unsupervised Few-Shot Image Classification · Data Augmentation

Abstract

The field of few-shot learning has been extensively explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then, by applying data augmentation to the support set's images and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied to small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet.
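The task-generation procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes images are NumPy arrays, and the function name `make_aal_task` and the Gaussian-noise default augmentation are hypothetical stand-ins (the paper uses standard image augmentations).

```python
import numpy as np

def make_aal_task(unlabeled, n_way=5, k_shot=1, augment=None, rng=None):
    """Build one few-shot task from unlabeled data, AAL-style.

    unlabeled : array of images, shape (N, ...).
    augment   : per-image callable used to build the target set;
                a hypothetical noise augmentation is used by default.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if augment is None:
        # Hypothetical stand-in augmentation: small additive Gaussian noise.
        augment = lambda x: x + rng.normal(0.0, 0.1, size=x.shape)

    # Assume: sample n_way * k_shot images and assign labels at random.
    idx = rng.choice(len(unlabeled), size=n_way * k_shot, replace=False)
    support_x = unlabeled[idx]
    support_y = np.repeat(np.arange(n_way), k_shot)
    rng.shuffle(support_y)  # labels carry no semantic meaning

    # Augment: the target set reuses the support labels on augmented images.
    target_x = np.stack([augment(x) for x in support_x])
    target_y = support_y.copy()
    return (support_x, support_y), (target_x, target_y)
```

Tasks produced this way can then be fed to any episodic meta-learner (e.g. MAML or a Prototypical Network) exactly as if they were labeled few-shot episodes.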

Results

Task                          | Dataset                      | Metric   | Value | Model
Image Classification          | Mini-Imagenet 5-way (1-shot) | Accuracy | 37.67 | AAL
Image Classification          | Mini-Imagenet 5-way (5-shot) | Accuracy | 49.18 | AAL
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 37.67 | AAL
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 49.18 | AAL

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Mixture of Experts in Large Language Models (2025-07-15)