Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


NT-Xent

Normalized Temperature-scaled Cross Entropy Loss

General · Introduced 2020 · 251 papers
Source Paper

Description

NT-Xent, or Normalized Temperature-scaled Cross Entropy Loss, is a contrastive loss function. Let $\text{sim}(\mathbf{u}, \mathbf{v}) = \mathbf{u}^{T}\mathbf{v} / (\lVert\mathbf{u}\rVert\,\lVert\mathbf{v}\rVert)$ denote the cosine similarity between two vectors $\mathbf{u}$ and $\mathbf{v}$. Then the loss function for a positive pair of examples $(i, j)$ is:

$$\ell_{i,j} = -\log \frac{\exp\left(\text{sim}\left(\mathbf{z}_{i}, \mathbf{z}_{j}\right)/\tau\right)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i]} \exp\left(\text{sim}\left(\mathbf{z}_{i}, \mathbf{z}_{k}\right)/\tau\right)}$$

where $\mathbb{1}_{[k \neq i]} \in \{0, 1\}$ is an indicator function evaluating to $1$ iff $k \neq i$, and $\tau$ denotes a temperature parameter. The final loss is computed across all positive pairs, both $(i, j)$ and $(j, i)$, in a mini-batch.

Source: SimCLR
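
As an illustration, the loss above can be sketched in NumPy. This is a minimal sketch, not SimCLR's official implementation; the helper name `nt_xent` and the convention that rows $2i$ and $2i{+}1$ of the input form a positive pair are our own assumptions:

```python
import numpy as np

def nt_xent(z: np.ndarray, tau: float = 0.5) -> float:
    """NT-Xent over 2N embeddings; rows 2i and 2i+1 are assumed to be a positive pair.

    A hypothetical helper sketching the formula above, not SimCLR's official code.
    """
    # Unit-normalize rows so the dot product equals cosine similarity sim(u, v)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                    # sim(z_i, z_k) / tau for all pairs
    np.fill_diagonal(sim, -np.inf)         # indicator 1_[k != i]: drop the k = i term
    # Row-wise log-softmax: log of the fraction inside the loss
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    n = z.shape[0]
    pos = np.arange(n) ^ 1                 # each row's positive partner (0<->1, 2<->3, ...)
    # Average l_{i,j} over all 2N ordered positive pairs, i.e. both (i,j) and (j,i)
    return float(-log_prob[np.arange(n), pos].mean())
```

With perfectly aligned pairs and $\tau = 1$, each row's positive similarity is $1$ and both negatives are $0$, so the loss reduces to $\log(e + 2) - 1 \approx 0.55$; lowering $\tau$ sharpens the softmax and, for clean pairs like these, reduces the loss.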

Papers Using This Method

- Low-resource keyword spotting using contrastively trained transformer acoustic word embeddings (2025-06-21)
- Probabilistic Variational Contrastive Learning (2025-06-11)
- scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data (2025-06-10)
- Circumventing Backdoor Space via Weight Symmetry (2025-06-09)
- SSPS: Self-Supervised Positive Sampling for Robust Self-Supervised Speaker Verification (2025-05-20)
- Representation Learning via Non-Contrastive Mutual Information (2025-04-23)
- Impact of Language Guidance: A Reproducibility Study (2025-04-10)
- Unsupervised Detection of Fraudulent Transactions in E-commerce Using Contrastive Learning (2025-03-24)
- A Statistical Theory of Contrastive Learning via Approximate Sufficient Statistics (2025-03-21)
- SinSim: Sinkhorn-Regularized SimCLR (2025-02-13)
- Dataset Ownership Verification in Contrastive Pre-trained Models (2025-02-11)
- Self-Supervised Frameworks for Speaker Verification via Bootstrapped Positive Sampling (2025-01-29)
- Bridging Contrastive Learning and Domain Adaptation: Theoretical Perspective and Practical Application (2025-01-28)
- Learning Compact and Robust Representations for Anomaly Detection (2025-01-09)
- Enhancing Contrastive Learning Inspired by the Philosophy of "The Blind Men and the Elephant" (2024-12-21)
- Self-Supervised Radiograph Anatomical Region Classification -- How Clean Is Your Real-World Data? (2024-12-20)
- Maximising Histopathology Segmentation using Minimal Labels via Self-Supervision (2024-12-19)
- AmCLR: Unified Augmented Learning for Cross-Modal Representations (2024-12-10)
- Mitigating Instance-Dependent Label Noise: Integrating Self-Supervised Pretraining with Pseudo-Label Refinement (2024-12-06)
- Tight PAC-Bayesian Risk Certificates for Contrastive Learning (2024-12-04)