Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Self-Supervision Can Be a Good Few-Shot Learner

Yuning Lu, Liangjian Wen, Jianzhuang Liu, Yajing Liu, Xinmei Tian

Published: 2022-07-19
Tasks: Few-Shot Learning · Unsupervised Few-Shot Image Classification · Few-Shot Image Classification · Cross-Domain Few-Shot Learning
Links: Paper · PDF · Code (official)

Abstract

Existing few-shot learning (FSL) methods rely on training with a large labeled dataset, which prevents them from leveraging abundant unlabeled data. From an information-theoretic perspective, we propose an effective unsupervised FSL method that learns representations with self-supervision. Following the InfoMax principle, our method learns comprehensive representations by capturing the intrinsic structure of the data. Specifically, we maximize the mutual information (MI) between instances and their representations with a low-bias MI estimator to perform self-supervised pre-training. Whereas supervised pre-training focuses on features that discriminate the seen classes, our self-supervised model is less biased toward the seen classes and therefore generalizes better to unseen classes. We explain that supervised pre-training and self-supervised pre-training actually maximize different MI objectives. Extensive experiments are further conducted to analyze their FSL performance under various training settings. Surprisingly, the results show that self-supervised pre-training can outperform supervised pre-training under appropriate conditions. Compared with state-of-the-art FSL methods, our approach achieves comparable performance on widely used FSL benchmarks without any labels of the base classes.
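The abstract does not spell out the MI estimator; the paper uses a low-bias variant, which this page does not reproduce. As a rough illustration of the general idea (maximizing MI between two augmented views' representations), here is a minimal NumPy sketch of the standard InfoNCE lower bound, a common MI estimator in self-supervised pre-training. All names and parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of paired view embeddings.

    Minimizing this loss maximizes a lower bound on the MI between the
    two views' representations. Positives sit on the diagonal of the
    similarity matrix (view 1 of image i vs. view 2 of image i).
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature
    # row-wise log-softmax (stabilized), then cross-entropy on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy check: aligned view pairs should score a much lower loss than
# randomly re-paired ones (features stand in for an encoder's outputs).
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
shuffled = info_nce_loss(z, z[rng.permutation(len(z))])
```

In practice `z1` and `z2` would be the encoder's outputs for two random augmentations of the same image batch, and the loss would be minimized by gradient descent over the encoder's weights.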

Results

Task | Dataset | Metric | Value | Model
Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 65.55 | UniSiam
Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 83.4 | UniSiam
Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 69.6 | UniSiam
Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 86.51 | UniSiam
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 65.55 | UniSiam
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 83.4 | UniSiam
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 69.6 | UniSiam
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 86.51 | UniSiam
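The "5-way (K-shot)" numbers above come from episodic evaluation: each episode samples 5 novel classes with K labeled support images per class, classifies a set of query images, and accuracy is averaged over many episodes. This page does not show which classification head is applied to the frozen UniSiam features; a minimal sketch of one common protocol, a nearest-centroid (prototype) classifier on L2-normalized frozen features, is below. The synthetic features are stand-ins for an encoder's outputs:

```python
import numpy as np

def prototype_accuracy(support_x, support_y, query_x, query_y):
    """Accuracy of a nearest-centroid classifier for one N-way K-shot episode."""
    # L2-normalize so nearest centroid reduces to highest cosine similarity
    support_x = support_x / np.linalg.norm(support_x, axis=1, keepdims=True)
    query_x = query_x / np.linalg.norm(query_x, axis=1, keepdims=True)
    classes = np.unique(support_y)
    # one prototype per class: mean of its normalized support embeddings
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    preds = classes[np.argmax(query_x @ protos.T, axis=1)]
    return float(np.mean(preds == query_y))

# Synthetic 5-way 1-shot episode with well-separated class clusters
rng = np.random.default_rng(0)
centers = 5.0 * rng.normal(size=(5, 32))          # 5 classes, 32-dim features
support_y = np.arange(5)                          # one support image per class
support_x = centers + 0.1 * rng.normal(size=(5, 32))
query_y = np.repeat(np.arange(5), 15)             # 15 query images per class
query_x = centers[query_y] + 0.1 * rng.normal(size=(75, 32))
acc = prototype_accuracy(support_x, support_y, query_x, query_y)
```

Benchmark results like those above average this per-episode accuracy over thousands of randomly sampled episodes, typically with a 95% confidence interval.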

Related Papers

- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)
- Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
- An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)
- Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport (2025-07-08)
- ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation (2025-07-03)
- Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications (2025-06-25)
- Ancient Script Image Recognition and Processing: A Review (2025-06-24)