Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Realistic Evaluation of Transductive Few-Shot Learning

Olivier Veilleux, Malik Boudiaf, Pablo Piantanida, Ismail Ben Ayed

2022-04-24 · NeurIPS 2021 · Few-Shot Learning

Paper · PDF · Code (official)

Abstract

Transductive inference is widely used in few-shot learning, as it leverages the statistics of the unlabeled query set of a few-shot task, typically yielding substantially better performances than its inductive counterpart. The current few-shot benchmarks use perfectly class-balanced tasks at inference. We argue that such an artificial regularity is unrealistic, as it assumes that the marginal label probability of the testing samples is known and fixed to the uniform distribution. In fact, in realistic scenarios, the unlabeled query sets come with arbitrary and unknown label marginals. We introduce and study the effect of arbitrary class distributions within the query sets of few-shot tasks at inference, removing the class-balance artefact. Specifically, we model the marginal probabilities of the classes as Dirichlet-distributed random variables, which yields a principled and realistic sampling within the simplex. This leverages the current few-shot benchmarks, building testing tasks with arbitrary class distributions. We evaluate experimentally state-of-the-art transductive methods over 3 widely used data sets, and observe, surprisingly, substantial performance drops, even below inductive methods in some cases. Furthermore, we propose a generalization of the mutual-information loss, based on $\alpha$-divergences, which can handle effectively class-distribution variations. Empirically, we show that our transductive $\alpha$-divergence optimization outperforms state-of-the-art methods across several data sets, models and few-shot settings. Our code is publicly available at https://github.com/oveilleux/Realistic_Transductive_Few_Shot.
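The abstract's core sampling idea, modeling the class marginals of a few-shot query set as a Dirichlet-distributed point on the simplex, can be illustrated with a short sketch. This is a minimal illustration using NumPy, not the authors' implementation: the function name, the symmetric concentration value `alpha=2.0`, and the query-set size of 75 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task_query(num_classes=5, query_size=75, alpha=2.0, rng=rng):
    """Sketch of Dirichlet-based query sampling for one few-shot task.

    Instead of a fixed uniform class balance, the class marginals p are
    drawn from a symmetric Dirichlet, yielding an arbitrary point on the
    probability simplex (alpha=2.0 is an illustrative concentration).
    """
    # Draw per-class marginal probabilities from Dirichlet(alpha, ..., alpha)
    p = rng.dirichlet(alpha * np.ones(num_classes))
    # Realize the query set: per-class sample counts drawn according to p
    counts = rng.multinomial(query_size, p)
    return p, counts

p, counts = sample_task_query()
print(p.round(3), counts)  # e.g. an unbalanced split instead of [15,15,15,15,15]
```

Smaller concentration values push the sampled marginals toward the corners of the simplex (highly imbalanced tasks), while large values recover the near-uniform tasks used by standard benchmarks.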

Results

Task | Dataset | Metric | Value | Model
Image Classification | Dirichlet Mini-Imagenet (5-way, 5-shot) | 1:1 Accuracy | 82.5 | α-TIM
Image Classification | Dirichlet Tiered-Imagenet (5-way, 5-shot) | 1:1 Accuracy | 86.6 | α-TIM
Image Classification | Dirichlet CUB-200 (5-way, 1-shot) | 1:1 Accuracy | 75.7 | α-TIM
Image Classification | Dirichlet Mini-Imagenet (5-way, 1-shot) | 1:1 Accuracy | 67.4 | α-TIM
Image Classification | Dirichlet CUB-200 (5-way, 5-shot) | 1:1 Accuracy | 89.8 | α-TIM
Image Classification | Dirichlet Tiered-Imagenet (5-way, 1-shot) | 1:1 Accuracy | 74.4 | α-TIM
Few-Shot Image Classification | Dirichlet Mini-Imagenet (5-way, 5-shot) | 1:1 Accuracy | 82.5 | α-TIM
Few-Shot Image Classification | Dirichlet Tiered-Imagenet (5-way, 5-shot) | 1:1 Accuracy | 86.6 | α-TIM
Few-Shot Image Classification | Dirichlet CUB-200 (5-way, 1-shot) | 1:1 Accuracy | 75.7 | α-TIM
Few-Shot Image Classification | Dirichlet Mini-Imagenet (5-way, 1-shot) | 1:1 Accuracy | 67.4 | α-TIM
Few-Shot Image Classification | Dirichlet CUB-200 (5-way, 5-shot) | 1:1 Accuracy | 89.8 | α-TIM
Few-Shot Image Classification | Dirichlet Tiered-Imagenet (5-way, 1-shot) | 1:1 Accuracy | 74.4 | α-TIM

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)
Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport (2025-07-08)
ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation (2025-07-03)
Dynamic Context-Aware Prompt Recommendation for Domain-Specific AI Applications (2025-06-25)
Ancient Script Image Recognition and Processing: A Review (2025-06-24)
Universal Music Representations? Evaluating Foundation Models on World Music Corpora (2025-06-20)