
Self-Attention Message Passing for Contrastive Few-Shot Learning

Ojas Kishorkumar Shirekar, Anuj Singh, Hadi Jamali-Rad

Published: 2022-10-12
Tasks: Few-Shot Learning, Unsupervised Few-Shot Image Classification, Contrastive Learning
Resources: Paper, PDF, Code (official): https://github.com/ojss/SAMPTransfer/

Abstract

Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision. Deep learning models, however, require an abundance of data and supervision to perform at a satisfactory level. Unsupervised few-shot learning (U-FSL) is the pursuit of bridging this gap between machines and humans. Inspired by the capacity of graph neural networks (GNNs) in discovering complex inter-sample relationships, we propose a novel self-attention based message passing contrastive learning approach (coined as SAMP-CLR) for U-FSL pre-training. We also propose an optimal transport (OT) based fine-tuning strategy (we call OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer). Our extensive experimental results corroborate the efficacy of SAMPTransfer in a variety of downstream few-shot classification scenarios, setting a new state-of-the-art for U-FSL on both miniImagenet and tieredImagenet benchmarks, offering up to 7%+ and 5%+ improvements, respectively. Our further investigations also confirm that SAMPTransfer remains on par with some supervised baselines on miniImagenet and outperforms all existing U-FSL baselines in a challenging cross-domain scenario. Our code can be found in our GitHub repository at https://github.com/ojss/SAMPTransfer/.
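To make the two components described in the abstract more concrete, the sketch below shows (i) one self-attention message-passing step over a batch of image embeddings, in the spirit of SAMP-CLR's GNN-style refinement, and (ii) an entropy-regularised Sinkhorn iteration that produces a transport plan between query embeddings and class prototypes, in the spirit of the OpT-Tune alignment. This is an illustrative PyTorch sketch only; the names (SelfAttentionMessagePassing, sinkhorn_plan) and all hyperparameters are hypothetical and are not taken from the official SAMPTransfer repository.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class SelfAttentionMessagePassing(nn.Module):
    """One message-passing step: every embedding in the batch attends to all
    others, treating the batch as a fully connected graph of samples."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (num_samples, dim) per-image embeddings from the backbone
        x = z.unsqueeze(0)                        # (1, N, dim)
        msg, _ = self.attn(x, x, x)               # attention-weighted messages
        x = self.norm1(x + msg)                   # residual update
        x = self.norm2(x + self.ffn(x))
        return x.squeeze(0)                       # refined embeddings, (N, dim)


def sinkhorn_plan(cost: torch.Tensor, eps: float = 0.05, iters: int = 50) -> torch.Tensor:
    """Entropy-regularised optimal-transport plan between queries (rows) and
    class prototypes (columns), assuming uniform marginals on both sides."""
    K = torch.exp(-cost / eps)                    # (num_queries, num_classes)
    u = torch.full((K.size(0),), 1.0 / K.size(0))
    v = torch.full((K.size(1),), 1.0 / K.size(1))
    r, c = torch.ones_like(u), torch.ones_like(v)
    for _ in range(iters):
        r = u / (K @ c)                           # scale rows toward marginal u
        c = v / (K.t() @ r)                       # scale columns toward marginal v
    return r.unsqueeze(1) * K * c.unsqueeze(0)    # transport plan, shape (queries, classes)
```

As a usage example under the same assumptions: in a 5-way episode, `cost` could be the pairwise squared Euclidean distance between message-passing-refined query embeddings and the mean embedding (prototype) of each support class; each row of the returned plan then gives a soft, task-aware assignment of one query over the five classes.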

Results

Task                          | Dataset                        | Metric   | Value | Model
Image Classification         | Tiered ImageNet 5-way (5-shot) | Accuracy | 65.19 | SAMPTransfer (Conv4)
Image Classification         | Mini-Imagenet 5-way (1-shot)   | Accuracy | 61.02 | SAMPTransfer (Conv4)
Image Classification         | Tiered ImageNet 5-way (1-shot) | Accuracy | 49.10 | SAMPTransfer (Conv4)
Image Classification         | Mini-Imagenet 5-way (5-shot)   | Accuracy | 72.52 | SAMPTransfer (Conv4)
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 65.19 | SAMPTransfer (Conv4)
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot)   | Accuracy | 61.02 | SAMPTransfer (Conv4)
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 49.10 | SAMPTransfer (Conv4)
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot)   | Accuracy | 72.52 | SAMPTransfer (Conv4)

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)