Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-scale Adaptive Task Attention Network for Few-Shot Learning

Haoxing Chen, Huaxiong Li, Yaohui Li, Chunlin Chen

2020-11-30 · Few-Shot Learning · Metric Learning · Few-Shot Image Classification

Paper · PDF

Abstract

The goal of few-shot learning is to classify unseen categories from only a few labeled samples. Recently, metric-learning methods based on low-level information have achieved satisfying performance, since local representations (LRs) are more consistent between seen and unseen classes. However, most of these methods treat each category in the support set independently, which is not sufficient to measure relations between features across a given task. Moreover, metric-learning methods based on low-level information suffer when dominant objects of different scales appear against complex backgrounds. To address these issues, this paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning. Specifically, we first use a multi-scale feature generator to produce features at multiple scales. Then, an adaptive task attention module is proposed to select the most important LRs across the entire task. Afterwards, a similarity-to-class module and a fusion layer are used to compute a joint multi-scale similarity between the query image and the support set. Extensive experiments on popular benchmarks clearly show the effectiveness of the proposed MATANet compared with state-of-the-art methods.
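The pipeline described above (per-scale local representations, a similarity-to-class comparison, and a weighted fusion across scales) can be sketched in a minimal numpy toy. This is not the authors' implementation: the top-k value, the cosine metric, the fusion weights, and all function names here are illustrative assumptions, and the adaptive task attention module is omitted for brevity.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity: a is (n, d), b is (m, d) -> (n, m).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def similarity_to_class(query_lrs, class_lrs, k=3):
    # Illustrative similarity-to-class score: for each local representation
    # (LR) of the query, take its k most similar support-class LRs and
    # average everything into a single scalar (k is an assumed value).
    sims = cosine_sim(query_lrs, class_lrs)   # (n_query_lrs, n_class_lrs)
    topk = np.sort(sims, axis=1)[:, -k:]      # k highest similarities per row
    return topk.mean()

def classify_query(query_scales, support_scales, fusion_weights):
    # query_scales: list over scales of (n_lrs, d) query LRs.
    # support_scales: list over scales of {class_name: (n_lrs, d)} LRs.
    # fusion_weights: one weight per scale (the fusion layer, assumed linear).
    classes = list(support_scales[0].keys())
    scores = np.zeros(len(classes))
    for w, q_lrs, sup in zip(fusion_weights, query_scales, support_scales):
        for i, c in enumerate(classes):
            scores[i] += w * similarity_to_class(q_lrs, sup[c])
    return classes[int(np.argmax(scores))]
```

In a 5-way episode, `support_scales` would hold the five classes' pooled LRs at each scale, and the query is assigned to the class with the highest fused score; the real model learns the attention and fusion weights end to end rather than fixing them as here.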

Results

Task                          | Dataset                       | Metric   | Value | Model
------------------------------|-------------------------------|----------|-------|--------
Image Classification          | Stanford Dogs 5-way (1-shot)  | Accuracy | 55.63 | MATANet
Image Classification          | CUB-200-2011 5-way (5-shot)   | Accuracy | 83.92 | MATANet
Image Classification          | Mini-Imagenet 5-way (5-shot)  | Accuracy | 72.67 | MATANet
Image Classification          | Mini-Imagenet 5-way (1-shot)  | Accuracy | 53.63 | MATANet
Image Classification          | Stanford Dogs 5-way (5-shot)  | Accuracy | 70.29 | MATANet
Image Classification          | Stanford Cars 5-way (5-shot)  | Accuracy | 91.89 | MATANet
Image Classification          | Stanford Cars 5-way (1-shot)  | Accuracy | 73.15 | MATANet
Image Classification          | CUB-200-2011 5-way (1-shot)   | Accuracy | 67.33 | MATANet
Few-Shot Image Classification | Stanford Dogs 5-way (1-shot)  | Accuracy | 55.63 | MATANet
Few-Shot Image Classification | CUB-200-2011 5-way (5-shot)   | Accuracy | 83.92 | MATANet
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot)  | Accuracy | 72.67 | MATANet
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot)  | Accuracy | 53.63 | MATANet
Few-Shot Image Classification | Stanford Dogs 5-way (5-shot)  | Accuracy | 70.29 | MATANet
Few-Shot Image Classification | Stanford Cars 5-way (5-shot)  | Accuracy | 91.89 | MATANet
Few-Shot Image Classification | Stanford Cars 5-way (1-shot)  | Accuracy | 73.15 | MATANet
Few-Shot Image Classification | CUB-200-2011 5-way (1-shot)   | Accuracy | 67.33 | MATANet

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Unsupervised Ground Metric Learning (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)
$\texttt{Droid}$: A Resource Suite for AI-Generated Code Detection (2025-07-11)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)
Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport (2025-07-08)