Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning

Yi Rong, Xiongbo Lu, Zhaoyang Sun, Yaxiong Chen, Shengwu Xiong

Published: 2023-04-26
Tasks: Few-Shot Learning, Image Classification, Self-Supervised Learning, Few-Shot Image Classification
Links: Paper | PDF | Code (official)

Abstract

Self-supervised learning (SSL) techniques have recently been integrated into the few-shot learning (FSL) framework and have shown promising results in improving the few-shot image classification performance. However, existing SSL approaches used in FSL typically seek the supervision signals from the global embedding of every single image. Therefore, during the episodic training of FSL, these methods cannot capture and fully utilize the local visual information in image samples and the data structure information of the whole episode, which are beneficial to FSL. To this end, we propose to augment the few-shot learning objective with a novel self-supervised Episodic Spatial Pretext Task (ESPT). Specifically, for each few-shot episode, we generate its corresponding transformed episode by applying a random geometric transformation to all the images in it. Based on these, our ESPT objective is defined as maximizing the local spatial relationship consistency between the original episode and the transformed one. With this definition, the ESPT-augmented FSL objective promotes learning more transferable feature representations that capture the local spatial features of different images and their inter-relational structural information in each input episode, thus enabling the model to generalize better to new categories with only a few samples. Extensive experiments indicate that our ESPT method achieves new state-of-the-art performance for few-shot image classification on three mainstay benchmark datasets. The source code will be available at: https://github.com/Whut-YiRong/ESPT.
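The abstract describes the core mechanism: transform every image in an episode with the same random geometric transformation, then maximize the consistency of local spatial relationships between the original and transformed episodes. Below is a minimal NumPy sketch of that idea, not the paper's implementation: flattened image patches stand in for CNN local feature maps, cosine similarity between patch descriptors stands in for the "local spatial relationship", and the loss penalizes disagreement between the two relation matrices after the rotation is undone on the feature grid. All function names and the 4x4 "images" are illustrative.

```python
import numpy as np

def local_features(img, patch=2):
    # Split a square HxW image into patch x patch blocks and flatten each
    # block into a local descriptor (a toy stand-in for a CNN feature map).
    h, w = img.shape
    blocks = img.reshape(h // patch, patch, w // patch, patch).transpose(0, 2, 1, 3)
    return blocks.reshape(-1, patch * patch)          # (num_patches, patch*patch)

def relation_matrix(feats):
    # Cosine similarity between every pair of local descriptors:
    # a simple proxy for the local spatial relationships in an episode.
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return f @ f.T

def espt_loss(episode, k=1, patch=2):
    # Toy ESPT-style consistency objective: rotate every image in the
    # episode by the same k*90 degrees, realign the local-feature grid,
    # and penalize disagreement between the two relation matrices.
    total = 0.0
    for img in episode:
        timg = np.rot90(img, k)
        r_orig = relation_matrix(local_features(img, patch))
        feats_t = local_features(timg, patch)
        g = timg.shape[0] // patch                    # feature-grid side length
        # Undo the rotation on the grid of descriptor positions so index i
        # refers to the same image location in both relation matrices.
        idx = np.rot90(np.arange(g * g).reshape(g, g), -k).reshape(-1)
        r_t = relation_matrix(feats_t[idx])
        total += np.mean((r_orig - r_t) ** 2)
    return total / len(episode)

rng = np.random.default_rng(0)
episode = [rng.normal(size=(4, 4)) for _ in range(5)]  # five toy 4x4 "images"
print(round(espt_loss(episode, k=1), 6))               # → 0.0
```

With this toy patch encoder the loss is (numerically) zero, because flattening raw patches is exactly equivariant to the rotation; a real CNN encoder is not, so the ESPT term gives a non-trivial training signal that pushes its local features toward spatial consistency.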

Results

Task                          | Dataset                        | Metric   | Value | Model
Image Classification          | CUB 200 5-way 5-shot           | Accuracy | 94.02 | ESPT
Image Classification          | CUB 200 5-way 1-shot           | Accuracy | 85.45 | ESPT
Image Classification          | Mini-Imagenet 5-way (5-shot)   | Accuracy | 84.11 | ESPT
Image Classification          | Mini-Imagenet 5-way (1-shot)   | Accuracy | 68.36 | ESPT
Image Classification          | Tiered ImageNet 5-way (1-shot) | Accuracy | 72.68 | ESPT
Image Classification          | Tiered ImageNet 5-way (5-shot) | Accuracy | 87.49 | ESPT
Few-Shot Image Classification | CUB 200 5-way 5-shot           | Accuracy | 94.02 | ESPT
Few-Shot Image Classification | CUB 200 5-way 1-shot           | Accuracy | 85.45 | ESPT
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot)   | Accuracy | 84.11 | ESPT
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot)   | Accuracy | 68.36 | ESPT
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 72.68 | ESPT
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 87.49 | ESPT

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)