Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BECLR: Batch Enhanced Contrastive Few-Shot Learning

Stylianos Poulakakis-Daktylidis, Hadi Jamali-Rad

2024-02-04 · ICLR 2024
Few-Shot Learning · Representation Learning · Unsupervised Few-Shot Image Classification · Few-Shot Image Classification · Contrastive Learning
Paper · PDF · Code (official)

Abstract

Learning quickly from very few labeled samples is a fundamental attribute that separates machines from humans in the era of deep representation learning. Unsupervised few-shot learning (U-FSL) aspires to bridge this gap by discarding the reliance on annotations at training time. Intrigued by the success of contrastive learning approaches in the realm of U-FSL, we structurally address their shortcomings in both the pretraining and downstream inference stages. We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space, enhancing positive sampling during pretraining and infusing implicit class-level insights into unsupervised contrastive learning. We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage. We propose an iterative Optimal Transport-based distribution Alignment (OpTA) strategy and demonstrate that it efficiently addresses the problem, especially in low-shot scenarios, where FSL approaches suffer the most from sample bias. We further discuss that DyCE and OpTA are two intertwined pieces of a novel end-to-end approach (which we coin BECLR), each constructively magnifying the other's impact. We then present a suite of extensive quantitative and qualitative experiments to corroborate that BECLR sets a new state-of-the-art across all existing U-FSL benchmarks (to the best of our knowledge), and significantly outperforms the best of the current baselines (codebase available at: https://github.com/stypoumic/BECLR).
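The OpTA step described in the abstract aligns the few labeled support prototypes with the (much larger) unlabeled query set to counter sample bias. Below is a minimal sketch of that general idea, assuming entropy-regularized (Sinkhorn) optimal transport with uniform marginals; the function names, dimensions, and hyperparameters here are illustrative and not taken from the official codebase:

```python
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=50):
    """Entropy-regularized optimal transport plan between uniform
    marginals, via Sinkhorn-Knopp scaling iterations."""
    cost = cost / cost.max()              # normalize for numerical stability
    K = np.exp(-cost / eps)               # Gibbs kernel
    n, m = cost.shape
    u = np.ones(n) / n
    for _ in range(n_iters):
        v = (np.ones(m) / m) / (K.T @ u)  # satisfy column marginals
        u = (np.ones(n) / n) / (K @ v)    # satisfy row marginals
    return u[:, None] * K * v[None, :]    # transport plan (rows sum to 1/n)

def align_prototypes(prototypes, queries, n_steps=3):
    """Iteratively transport class prototypes toward the query-set
    distribution (an OpTA-style alignment sketch)."""
    protos = prototypes.copy()
    for _ in range(n_steps):
        # pairwise squared-Euclidean cost between prototypes and queries
        cost = ((protos[:, None, :] - queries[None, :, :]) ** 2).sum(-1)
        plan = sinkhorn(cost)
        # barycentric map: move each prototype to its plan-weighted query mean
        protos = (plan / plan.sum(axis=1, keepdims=True)) @ queries
    return protos
```

The barycentric update pulls each prototype into the query embedding cloud, which is the intuition behind correcting the bias of prototypes estimated from only one or five support samples.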

Results

Task | Dataset | Metric | Value | Model
Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 87.86 | BECLR
Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 80.57 | BECLR
Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 81.69 | BECLR
Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 87.82 | BECLR
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 87.86 | BECLR
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 80.57 | BECLR
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 81.69 | BECLR
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 87.82 | BECLR

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)