Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations

Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman

Published: 2021-04-29 · ICCV 2021

Tasks: Self-Supervised Image Classification · Image Classification · Self-Supervised Learning · Transfer Learning · Contrastive Learning · Fine-Grained Image Classification · Semi-Supervised Image Classification

Abstract

Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, we are interested in using positives from other instances in the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR), samples the nearest neighbors from the dataset in the latent space, and treats them as positives. This provides more semantic variations than pre-defined transformations. We find that using the nearest-neighbor as positive in contrastive losses improves performance significantly on ImageNet classification, from 71.7% to 75.6%, outperforming previous state-of-the-art methods. On semi-supervised learning benchmarks we improve performance significantly when only 1% ImageNet labels are available, from 53.8% to 56.5%. On transfer learning benchmarks our method outperforms state-of-the-art methods (including supervised learning with ImageNet) on 8 out of 12 downstream datasets. Furthermore, we demonstrate empirically that our method is less reliant on complex data augmentations. We see a relative reduction of only 2.1% ImageNet Top-1 accuracy when we train using only random crops.
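
To make the objective concrete, below is a minimal sketch of a nearest-neighbor contrastive loss in PyTorch, under stated assumptions: each embedding's positive is swapped for its closest match in a support queue of past embeddings, and the loss is a standard InfoNCE over the batch. The function name `nnclr_loss`, the temperature value, and the shape conventions are illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, support_queue, temperature=0.1):
    """Sketch of a nearest-neighbor contrastive loss.

    z1, z2:        (B, D) embeddings of two augmented views of the same batch
    support_queue: (Q, D) embeddings of earlier samples; assumed detached,
                   so no gradient flows through the retrieved neighbor
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(support_queue, dim=1)

    # Replace each first-view embedding with its nearest neighbor
    # (highest cosine similarity) from the support set.
    nn_idx = (z1 @ queue.t()).argmax(dim=1)     # (B,)
    nn_z1 = queue[nn_idx]                       # (B, D) positives

    # InfoNCE: the neighbor should match the second view of the same
    # sample (diagonal); other samples' second views act as negatives.
    logits = nn_z1 @ z2.t() / temperature       # (B, B)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In the paper the support set is maintained as a FIFO queue refreshed with each batch's embeddings, and the loss is symmetrized by swapping the roles of the two views; both details are omitted here for brevity.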

Results

Task | Dataset | Metric | Value | Model
Image Classification | Stanford Cars | Accuracy | 67.1 | NNCLR
Image Classification | DTD | Accuracy | 75.5 | NNCLR
Image Classification | CIFAR-10 | Percentage correct | 93.7 | NNCLR
Image Classification | Oxford-IIIT Pet Dataset | Accuracy | 91.8 | NNCLR
Image Classification | Flowers-102 | Accuracy | 95.1 | NNCLR
Image Classification | PASCAL VOC 2007 | Accuracy | 83 | NNCLR
Image Classification | CIFAR-100 | Percentage correct | 79 | NNCLR
Image Classification | Food-101 | Accuracy (%) | 76.7 | NNCLR
Image Classification | FGVC Aircraft | Accuracy | 64.1 | NNCLR
Image Classification | SUN397 | Accuracy | 62.5 | NNCLR
Image Classification | ImageNet - 10% labeled data | Top 5 Accuracy | 89.3 | NNCLR (ResNet-50)
Image Classification | ImageNet - 1% labeled data | Top 5 Accuracy | 80.7 | NNCLR (ResNet-50)
Image Classification | ImageNet | Top 5 Accuracy | 92.4 | NNCLR (ResNet-50, multi-crop)
Fine-Grained Image Classification | FGVC Aircraft | Accuracy | 64.1 | NNCLR
Fine-Grained Image Classification | SUN397 | Accuracy | 62.5 | NNCLR
Semi-Supervised Image Classification | ImageNet - 10% labeled data | Top 5 Accuracy | 89.3 | NNCLR (ResNet-50)
Semi-Supervised Image Classification | ImageNet - 1% labeled data | Top 5 Accuracy | 80.7 | NNCLR (ResNet-50)

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)