Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Self-Supervised Learning by Estimating Twin Class Distributions

Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li

Published: 2021-10-14

Tasks: Self-Supervised Image Classification, Image Classification, Representation Learning, Self-Supervised Learning, Transfer Learning, Unsupervised Image Classification, Fine-Grained Image Classification, Semi-Supervised Image Classification

Abstract

We present TWIST, a simple and theoretically explainable self-supervised representation learning method by classifying large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions of two augmented images. Without supervision, we enforce the class distributions of different augmentations to be consistent. However, simply minimizing the divergence between augmentations will cause collapsed solutions, i.e., outputting the same class probability distribution for all images. In this case, no information about the input image is left. To solve this problem, we propose to maximize the mutual information between the input and the class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class prediction for each sample assertive and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, TWIST can naturally avoid the collapsed solutions without specific designs such as asymmetric network, stop-gradient operation, or momentum encoder. As a result, TWIST outperforms state-of-the-art methods on a wide range of tasks. Especially, TWIST performs surprisingly well on semi-supervised learning, achieving 61.2% top-1 accuracy with 1% ImageNet labels using a ResNet-50 as backbone, surpassing previous best results by an absolute improvement of 6.2%. Code and pre-trained models are available at: https://github.com/bytedance/TWIST
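The abstract describes three interacting terms: a consistency term pulling the twin class distributions together, a per-sample entropy term (minimized, so each prediction is sharp), and a batch-mean entropy term (maximized, so predictions stay diverse and collapse is avoided). The following NumPy sketch illustrates that structure; the function name, the symmetric-KL choice for consistency, and the equal term weights are illustrative assumptions, not the authors' exact formulation (see the official repository for that).

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-8):
    """Shannon entropy; eps guards against log(0)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def twist_loss(p1, p2, eps=1e-8):
    """Illustrative TWIST-style objective for twin class distributions.

    p1, p2: arrays of shape [batch, classes] whose rows sum to 1
    (e.g., softmax outputs for two augmentations of the same images).
    Term weights here are equal for simplicity; the paper's weighting
    may differ.
    """
    # 1) Consistency: symmetric KL divergence between the twin distributions.
    kl12 = np.sum(p1 * (np.log(p1 + eps) - np.log(p2 + eps)), axis=-1)
    kl21 = np.sum(p2 * (np.log(p2 + eps) - np.log(p1 + eps)), axis=-1)
    consistency = 0.5 * (kl12 + kl21).mean()
    # 2) Sharpness: mean per-sample entropy, minimized -> assertive predictions.
    sharpness = 0.5 * (entropy(p1).mean() + entropy(p2).mean())
    # 3) Diversity: entropy of the batch-mean distribution, maximized
    #    (entered with a minus sign) -> different samples get different classes.
    diversity = 0.5 * (entropy(p1.mean(axis=0)) + entropy(p2.mean(axis=0)))
    return consistency + sharpness - diversity

# A collapsed solution (every sample assigned to the same class) scores worse
# than a sharp, diverse assignment, which is exactly the failure mode the
# mutual-information terms are designed to rule out.
collapsed = np.tile(np.eye(4)[0], (4, 1))  # all 4 samples -> class 0
diverse = np.eye(4)                        # each sample -> its own class
assert twist_loss(diverse, diverse) < twist_loss(collapsed, collapsed)
```

With sharp, diverse predictions the loss approaches its minimum of −log(num_classes): consistency and per-sample entropy are near zero while the mean distribution is uniform.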

Results

Task                               | Dataset                 | Metric       | Value | Model
-----------------------------------|-------------------------|--------------|-------|------------------
Image Classification               | DTD                     | Accuracy     | 76.6  | TWIST (ResNet-50)
Image Classification               | Oxford-IIIT Pet Dataset | Accuracy     | 94.5  | TWIST (ResNet-50)
Image Classification               | Food-101                | Accuracy (%) | 89.3  | TWIST (ResNet-50)
Image Classification               | SUN397                  | Accuracy     | 67.4  | TWIST (ResNet-50)
Image Classification               | ImageNet                | ARI          | 30    | TWIST (ResNet-50)
Image Classification               | ImageNet                | Accuracy (%) | 40.6  | TWIST (ResNet-50)
Fine-Grained Image Classification  | SUN397                  | Accuracy     | 67.4  | TWIST (ResNet-50)

Related Papers

- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)