Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dynamic Few-Shot Visual Learning without Forgetting

Spyros Gidaris, Nikos Komodakis

2018-04-25 · CVPR 2018
Tasks: Few-Shot Learning, Object Recognition, Few-Shot Image Classification, One-Shot Learning, General Classification
Links: Paper · PDF · Code (official)

Abstract

The human visual system has the remarkable ability to effortlessly learn novel concepts from only a few examples. Mimicking this behavior in machine vision systems is an interesting and very challenging research problem with many practical advantages for real-world vision applications. In this context, the goal of our work is to devise a few-shot visual learning system that, during test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories). To achieve that goal we propose (a) to extend an object recognition system with an attention-based few-shot classification weight generator, and (b) to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors. The latter, apart from unifying the recognition of both novel and base categories, also leads to feature representations that generalize better on "unseen" categories. We extensively evaluate our approach on Mini-ImageNet, where we improve the prior state-of-the-art on few-shot recognition (i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings respectively) while sacrificing no accuracy on the base categories, a characteristic that most prior approaches lack. Finally, we apply our approach on the recently introduced few-shot benchmark of Bharath and Girshick [4], where we also achieve state-of-the-art results. The code and models of our paper will be published on: https://github.com/gidariss/FewShotWithoutForgetting
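The cosine-similarity classifier mentioned in point (b) can be sketched in a few lines: L2-normalize both the feature vectors and the per-class weight vectors, take their dot products, and scale by a temperature. This is a minimal NumPy illustration of the general idea, not the paper's implementation; the scale `tau=10.0` and the toy dimensions are illustrative assumptions (in the paper the scale is learned).

```python
import numpy as np

def cosine_classifier(feats, weights, tau=10.0):
    """Score each feature against each class weight by scaled cosine similarity.

    feats:   (batch, feat_dim) feature representations
    weights: (num_classes, feat_dim) classification weight vectors
    tau:     illustrative temperature (learned in the paper)
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)      # unit-norm features
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)  # unit-norm class weights
    return tau * f @ w.T                                          # (batch, num_classes) scores

# A feature identical to a class weight scores exactly tau (cosine = 1):
scores = cosine_classifier(np.array([[3.0, 0.0]]), np.array([[1.0, 0.0]]))
# scores[0, 0] == 10.0
```

Because both sides are normalized, the score depends only on direction, which is what lets novel-class weights (built from a few support features) live on the same scale as the trained base-class weights.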

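Point (a), the attention-based weight generator, can likewise be sketched: a novel class's weight vector is composed from the averaged (normalized) support features plus an attention-weighted mixture of the base-class weights. The sketch below is a loose approximation under stated assumptions — the mixing coefficients `phi_avg`/`phi_att` and temperature `tau` are fixed here for illustration, whereas the paper learns its generator parameters, and the attention form here (softmax over cosine scores) is a simplification.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate_novel_weight(support_feats, base_weights,
                          phi_avg=0.5, phi_att=0.5, tau=5.0):
    """Hypothetical sketch of an attention-based few-shot weight generator.

    support_feats: (k, feat_dim) features of the k support examples
    base_weights:  (n_base, feat_dim) trained base-class weight vectors
    Returns a (feat_dim,) weight vector for the novel class.
    """
    z = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    w_base = base_weights / np.linalg.norm(base_weights, axis=1, keepdims=True)

    w_avg = z.mean(axis=0)  # averaged support features

    # Each support example attends over the base weights via cosine scores,
    # pulling in knowledge from visually similar base categories.
    att = np.stack([softmax(tau * zi @ w_base.T) for zi in z])  # (k, n_base)
    w_att = (att @ w_base).mean(axis=0)                          # (feat_dim,)

    return phi_avg * w_avg + phi_att * w_att
```

The generated vector then plugs directly into the cosine classifier alongside the base-class weights, which is what lets the system recognize novel and base categories jointly without retraining.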
Results

Task                           | Dataset                      | Metric         | Value | Model
Image Classification           | ImageNet (1-shot)            | Top-5 Accuracy | 58.2  | Dynamic FSL
Image Classification           | Mini-ImageNet 5-way (5-shot) | Accuracy       | 72.81 | Cosine similarity function + C64F feature extractor
Image Classification           | Mini-ImageNet 5-way (1-shot) | Accuracy       | 56.2  | Cosine similarity function + C64F feature extractor
Few-Shot Image Classification  | ImageNet (1-shot)            | Top-5 Accuracy | 58.2  | Dynamic FSL
Few-Shot Image Classification  | Mini-ImageNet 5-way (5-shot) | Accuracy       | 72.81 | Cosine similarity function + C64F feature extractor
Few-Shot Image Classification  | Mini-ImageNet 5-way (1-shot) | Accuracy       | 56.2  | Cosine similarity function + C64F feature extractor

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis (2025-07-10)
Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport (2025-07-08)
GeoMag: A Vision-Language Model for Pixel-level Fine-Grained Remote Sensing Image Parsing (2025-07-08)
ViRefSAM: Visual Reference-Guided Segment Anything Model for Remote Sensing Segmentation (2025-07-03)
Out-of-distribution detection in 3D applications: a review (2025-07-01)