Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Attribute Surrogates Learning and Spectral Tokens Pooling in Transformers for Few-shot Learning

Yangji He, Weihan Liang, Dongyang Zhao, Hong-Yu Zhou, Weifeng Ge, Yizhou Yu, Wenqiang Zhang

2022-03-17 · CVPR 2022 · Few-Shot Learning · Attribute · Self-Supervised Learning · Few-Shot Image Classification

Paper · PDF · Code (official)

Abstract

This paper presents new hierarchically cascaded transformers that improve data efficiency through attribute surrogates learning and spectral tokens pooling. Vision transformers have recently been regarded as a promising alternative to convolutional neural networks for visual recognition, but when data is insufficient they overfit and show inferior performance. To improve data efficiency, we propose hierarchically cascaded transformers that exploit intrinsic image structures through spectral tokens pooling and optimize the learnable parameters through latent attribute surrogates. Spectral tokens pooling uses the intrinsic image structure to reduce the ambiguity between foreground content and background noise, while the attribute surrogate learning scheme is designed to benefit from the rich visual information in image-label pairs rather than the simple visual concepts assigned by their labels. Our Hierarchically Cascaded Transformers, called HCTransformers, are built upon the self-supervised learning framework DINO and are tested on several popular few-shot learning benchmarks. In the inductive setting, HCTransformers surpass the DINO baseline by a large margin of 9.7% in 5-way 1-shot accuracy and 9.17% in 5-way 5-shot accuracy on miniImageNet, demonstrating that HCTransformers extract discriminative features efficiently. HCTransformers also show clear advantages over state-of-the-art few-shot classification methods in both 5-way 1-shot and 5-way 5-shot settings on four popular benchmark datasets: miniImageNet, tieredImageNet, FC100, and CIFAR-FS. The trained weights and code are available at https://github.com/StomachCold/HCTransformers.
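The idea behind spectral tokens pooling can be illustrated with a minimal, assumption-heavy sketch: build an affinity graph over the patch-token embeddings, embed the tokens with the leading eigenvectors of the normalized graph Laplacian, cluster that embedding, and average the tokens within each cluster. This is not the paper's actual implementation (the function name, the cosine affinity, and the tiny k-means step are all illustrative choices), only a standard spectral-clustering rendering of the idea.

```python
import numpy as np

def spectral_tokens_pooling(tokens, n_pools, seed=0):
    """Merge N token embeddings into n_pools pooled tokens via spectral clustering.

    Hypothetical sketch: cosine-similarity affinity -> normalized graph
    Laplacian -> spectral embedding -> small k-means -> average per cluster.
    """
    rng = np.random.default_rng(seed)
    n, d = tokens.shape
    # Affinity: cosine similarity between tokens, clipped to be non-negative.
    normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    affinity = np.clip(normed @ normed.T, 0.0, None)
    # Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
    deg = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg + 1e-8)
    lap = np.eye(n) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    # Spectral embedding: eigenvectors of the n_pools smallest eigenvalues.
    _, eigvecs = np.linalg.eigh(lap)
    embed = eigvecs[:, :n_pools]
    # Tiny k-means on the spectral embedding.
    centers = embed[rng.choice(n, n_pools, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(20):
        dists = ((embed[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_pools):
            if (labels == k).any():
                centers[k] = embed[labels == k].mean(axis=0)
    # Pool: average the original tokens assigned to each cluster.
    pooled = np.stack([
        tokens[labels == k].mean(axis=0) if (labels == k).any() else np.zeros(d)
        for k in range(n_pools)
    ])
    return pooled, labels

tokens = np.random.default_rng(1).normal(size=(16, 8))  # 16 tokens, dim 8
pooled, labels = spectral_tokens_pooling(tokens, n_pools=4)
print(pooled.shape)  # (4, 8)
```

The point of the eigendecomposition is that tokens connected through high-affinity paths (e.g. patches of the same foreground object) land close together in the spectral embedding, so averaging within clusters pools structurally coherent regions rather than arbitrary windows.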

Results

The archive lists the same HCTransformers numbers under four task leaderboards (Few-Shot Learning, Image Classification, Meta-Learning, and Few-Shot Image Classification). Consolidated, the reported accuracies (%) are:

| Dataset        | 5-way 1-shot | 5-way 5-shot | Model          |
|----------------|--------------|--------------|----------------|
| miniImageNet   | 74.74        | 89.19        | HCTransformers |
| tieredImageNet | 79.67        | 91.72        | HCTransformers |
| FC100          | 48.27        | 66.42        | HCTransformers |
| CIFAR-FS       | 78.89        | 90.50        | HCTransformers |
