Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Barlow Twins

General · Introduced 2021 · 71 papers
Source Paper: Barlow Twins: Self-Supervised Learning via Redundancy Reduction (Zbontar et al., 2021)

Description

Barlow Twins is a self-supervised learning method that applies redundancy reduction, a principle first proposed in neuroscience, to self-supervised learning. The objective function measures the cross-correlation matrix between the embeddings of two identical networks fed with distorted versions of a batch of samples, and tries to make this matrix close to the identity. This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. Barlow Twins does not require large batches, nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly, it benefits from very high-dimensional output vectors.
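Concretely, the loss from the source paper has two terms over the cross-correlation matrix C of the batch-normalized embeddings of the two views: an invariance term that pulls the diagonal entries toward 1, and a redundancy-reduction term, weighted by a coefficient lambda, that pulls the off-diagonal entries toward 0, i.e. L = Σ_i (1 − C_ii)² + λ Σ_i Σ_{j≠i} C_ij². Below is a minimal PyTorch sketch of this objective. The function name barlow_twins_loss, the default lambda value, and the normalization epsilon are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambda_: float = 5e-3) -> torch.Tensor:
    """Sketch of the Barlow Twins objective.

    z_a, z_b: (batch_size, dim) embeddings of two distorted views of the
    same batch (e.g. projector outputs). lambda_ weights the
    redundancy-reduction term; 5e-3 follows the value reported in the paper.
    """
    n, d = z_a.shape
    # Normalize each embedding dimension to zero mean / unit std over the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    # Empirical cross-correlation matrix C, shape (dim, dim).
    c = (z_a.T @ z_b) / n
    # Invariance term: diagonal entries pulled toward 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries pulled toward 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_ * off_diag
```

Because the objective decorrelates the components of the embedding rather than contrasting samples against each other, it needs no negative pairs, which is why it works without large batches or architectural asymmetry between the twins.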

Papers Using This Method

Enhancing User Sequence Modeling through Barlow Twins-based Self-Supervised Learning (2025-05-02)
SinSim: Sinkhorn-Regularized SimCLR (2025-02-13)
Self-supervised Benchmark Lottery on ImageNet: Do Marginal Improvements Translate to Improvements on Similar Datasets? (2025-01-26)
WARLearn: Weather-Adaptive Representation Learning (2024-11-21)
Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization (2024-11-20)
Infinite Width Limits of Self Supervised Neural Networks (2024-11-17)
Efficient Self-Supervised Barlow Twins from Limited Tissue Slide Cohorts for Colonic Pathology Diagnostics (2024-11-08)
A Theoretical Characterization of Optimal Data Augmentations in Self-Supervised Learning (2024-11-04)
Uncovering RL Integration in SSL Loss: Objective-Specific Implications for Data-Efficient RL (2024-10-22)
Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings Methods (2024-10-05)
Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing (2024-09-20)
Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing (2024-09-16)
Learning Robust Representations for Communications over Noisy Channels (2024-09-02)
PreMix: Addressing Label Scarcity in Whole Slide Image Classification with Pre-trained Multiple Instance Learning Aggregators (2024-08-02)
Barlow Twins Deep Neural Network for Advanced 1D Drug-Target Interaction Prediction (2024-07-31)
Dense Self-Supervised Learning for Medical Image Segmentation (2024-07-29)
Enhanced Masked Image Modeling to Avoid Model Collapse on Multi-modal MRI Datasets (2024-07-15)
Maximum Manifold Capacity Representations in State Representation Learning (2024-05-22)
From Barlow Twins to Triplet Training: Differentiating Dementia with Limited Data (2024-04-09)
Towards Better Understanding of Contrastive Sentence Representation Learning: A Unified Paradigm for Gradient (2024-02-28)