Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Self-supervised Learning is More Robust to Dataset Imbalance

Hong Liu, Jeff Z. HaoChen, Adrien Gaidon, Tengyu Ma

2021-10-11 · ICLR 2022 · Long-tail Learning · Self-Supervised Learning

Paper · PDF · Code

Abstract

Self-supervised learning (SSL) is a scalable way to learn general visual representations since it learns without labels. However, large-scale unlabeled datasets in the wild often have long-tailed label distributions, where little is known about the behavior of SSL. In this work, we systematically investigate self-supervised learning under dataset imbalance. First, we find through extensive experiments that off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations. The performance gap between balanced and imbalanced pre-training with SSL is significantly smaller than the gap with supervised learning, across sample sizes, for both in-domain and, especially, out-of-domain evaluation. Second, towards understanding the robustness of SSL, we hypothesize that SSL learns richer features from frequent data: it may learn label-irrelevant-but-transferable features that help classify rare classes and benefit downstream tasks. In contrast, supervised learning has no incentive to learn features irrelevant to the labels from frequent examples. We validate this hypothesis with semi-synthetic experiments and theoretical analyses in a simplified setting. Third, inspired by the theoretical insights, we devise a re-weighted regularization technique that consistently improves SSL representation quality on imbalanced datasets under several evaluation criteria, closing the small gap between balanced and imbalanced datasets with the same number of examples.
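The re-weighted regularization idea from the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the paper's actual technique (rwSAM) re-weights a sharpness-aware regularizer during SimSiam pre-training, whereas here we only show the core re-weighting mechanism, assuming a simple Gaussian kernel density estimate in feature space so that rare (low-density) examples receive a larger regularization weight. The function names and the exact weighting exponent are assumptions made for this sketch.

```python
import numpy as np

def density_weights(features, bandwidth=1.0, alpha=0.5):
    """Per-example weights inversely proportional to estimated feature density.

    Low-density (rare) examples get larger weights, so the regularizer
    acts more strongly on them. The Gaussian KDE and the exponent alpha
    are illustrative choices for this sketch.
    """
    # Pairwise squared distances for a Gaussian kernel density estimate.
    diff = features[:, None, :] - features[None, :, :]
    sq_dists = (diff ** 2).sum(axis=-1)
    density = np.exp(-sq_dists / (2 * bandwidth ** 2)).mean(axis=1)
    weights = density ** (-alpha)
    return weights / weights.mean()  # normalize to mean 1

def reweighted_loss(ssl_losses, reg_terms, weights):
    """Total objective: mean SSL loss plus density-re-weighted regularizer."""
    return ssl_losses.mean() + (weights * reg_terms).mean()
```

For example, with twenty points clustered near the origin and one outlier far away, the outlier's weight dominates, so its regularization term counts more in the total objective.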

Results

Task | Dataset | Metric | Value | Model
Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 14.4 | SimSiam+rwSAM
Few-Shot Image Classification | CIFAR-10-LT (ρ=100) | Error Rate | 14.4 | SimSiam+rwSAM
Generalized Few-Shot Classification | CIFAR-10-LT (ρ=100) | Error Rate | 14.4 | SimSiam+rwSAM
Long-tail Learning | CIFAR-10-LT (ρ=100) | Error Rate | 14.4 | SimSiam+rwSAM
Generalized Few-Shot Learning | CIFAR-10-LT (ρ=100) | Error Rate | 14.4 | SimSiam+rwSAM

Related Papers

A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder (2025-07-14)
Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
World4Drive: End-to-End Autonomous Driving via Intention-aware Physical Latent World Model (2025-07-01)
ShapeEmbed: a self-supervised learning framework for 2D contour quantification (2025-07-01)
RetFiner: A Vision-Language Refinement Scheme for Retinal Foundation Models (2025-06-27)
Boosting Generative Adversarial Transferability with Self-supervised Vision Transformer Features (2025-06-26)
Hybrid Deep Learning and Signal Processing for Arabic Dialect Recognition in Low-Resource Settings (2025-06-26)