MoCo v2

General · Introduced 2020 · 30 papers

Description

MoCo v2 is an improved version of the Momentum Contrast (MoCo) self-supervised learning algorithm. Motivated by the findings presented in the SimCLR paper, the authors:

  • Replace the single fully connected projection head with a 2-layer MLP head with ReLU for the unsupervised training stage.
  • Add blur augmentation to the data augmentation pipeline.
  • Use a cosine learning rate schedule.

These modifications enable MoCo v2 to outperform SimCLR, the state of the art at the time, while using a smaller batch size and fewer training epochs.
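To make two of the modifications concrete, here is a minimal sketch, in NumPy rather than the authors' PyTorch implementation, of a 2-layer MLP projection head with ReLU and a cosine learning rate schedule. All dimensions, names, and hyperparameter values below are illustrative assumptions, not values from the paper.

```python
import math
import numpy as np

def mlp_head(x, w1, b1, w2, b2):
    """2-layer MLP projection head with ReLU, replacing MoCo v1's single fc layer."""
    h = np.maximum(x @ w1 + b1, 0.0)  # hidden layer followed by ReLU
    return h @ w2 + b2                # linear projection to the contrastive embedding space

def cosine_lr(base_lr, epoch, total_epochs):
    """Cosine schedule: decays base_lr smoothly from base_lr (epoch 0) to 0 (final epoch)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

# Hypothetical example: project a batch of 4 feature vectors of dim 8 down to dim 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1, b1 = rng.standard_normal((8, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 2)), np.zeros(2)
z = mlp_head(x, w1, b1, w2, b2)
print(z.shape)  # (4, 2)

# The learning rate starts at base_lr and reaches 0 at the last epoch.
print(cosine_lr(0.03, 0, 200))
print(cosine_lr(0.03, 200, 200))
```

In the actual training pipeline the MLP head sits on top of the encoder during pre-training and is discarded afterwards; only the encoder features are used for downstream tasks.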

Papers Using This Method

  • Enhancing Contrastive Learning Inspired by the Philosophy of "The Blind Men and the Elephant" (2024-12-21)
  • Contrastive Learning for Image Complexity Representation (2024-08-06)
  • Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation (2024-02-22)
  • SASSL: Enhancing Self-Supervised Learning via Neural Style Transfer (2023-12-02)
  • Matrix Information Theory for Self-Supervised Learning (2023-05-27)
  • MoBYv2AL: Self-supervised Active Learning for Image Classification (2023-01-04)
  • Revisiting the Critical Factors of Augmentation-Invariant Representation Learning (2022-07-30)
  • Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo (2022-03-30)
  • DATA: Domain-Aware and Task-Aware Self-supervised Learning (2022-03-17)
  • InsCon: Instance Consistency Feature Representation via Self-Supervised Learning (2022-03-15)
  • Measuring Self-Supervised Representation Quality for Downstream Classification using Discriminative Features (2022-03-03)
  • Energy-Based Contrastive Learning of Visual Representations (2022-02-10)
  • Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning (2021-11-26)
  • RegionCL: Can Simple Region Swapping Contribute to Contrastive Learning? (2021-11-24)
  • 8-bit Optimizers via Block-wise Quantization (2021-10-06)
  • Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning (2021-09-29)
  • Self-Supervised Visual Representations Learning by Contrastive Mask Prediction (2021-08-18)
  • Self-Supervised Learning with Swin Transformers (2021-05-10)
  • Jigsaw Clustering for Unsupervised Visual Representation Learning (2021-04-01)
  • Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors (2021-03-08)