Description
Dense Contrastive Learning (DenseCL) is a self-supervised learning method designed for dense prediction tasks. It learns representations by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of the same input image. Unlike the standard contrastive loss, which operates at the global feature level and is computed between the single feature vectors output by a global projection head, the dense contrastive loss operates at the local feature level and is computed between the dense feature vectors output by a dense projection head.
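The pixel-level loss described above can be sketched as a per-location InfoNCE objective over two dense feature maps. The sketch below is a simplified illustration, not the paper's implementation: it assumes the positive pair for each spatial location is the same location in the other view, whereas DenseCL derives the correspondence from backbone feature similarity, and it uses NumPy in place of a deep-learning framework. The function name and shapes are illustrative.

```python
import numpy as np

def dense_contrastive_loss(q, k, temperature=0.2):
    """Pixel-level InfoNCE loss between two views' dense features.

    q, k: (S, D) arrays of dense feature vectors from the dense
    projection head, where S = H * W flattened spatial locations and
    D is the projection dimension.

    Assumption (for illustration only): location i in view q matches
    location i in view k. DenseCL instead extracts the correspondence
    across views from backbone feature similarity.
    """
    # L2-normalize each dense feature vector along the channel axis.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)

    # Pairwise similarities between all locations of the two views.
    logits = (q @ k.T) / temperature            # shape (S, S)
    logits -= logits.max(axis=1, keepdims=True) # numerical stability

    # Log-softmax over each row: location i's positive is k[i],
    # all other locations serve as negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

# Usage: matched dense features should yield a lower loss than mismatched ones.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
loss_matched = dense_contrastive_loss(feats, feats.copy())
loss_random = dense_contrastive_loss(feats, rng.standard_normal((16, 8)))
```

In practice the loss is averaged with the global contrastive loss, so the model is trained at both the image level and the pixel level.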
Papers Using This Method
- An Asymmetric Augmented Self-Supervised Learning Method for Unsupervised Fine-Grained Image Hashing (2024-01-01)
- Fine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning (2023-09-01)
- Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations (2022-10-17)
- Pixel-level Correspondence for Self-Supervised Learning from Video (2022-07-08)
- Contrastive Learning of Features between Images and LiDAR (2022-06-24)
- Cross-Patch Dense Contrastive Learning for Semi-Supervised Segmentation of Cellular Nuclei in Histopathologic Images (2022-01-01)
- Dense Contrastive Visual-Linguistic Pretraining (2021-09-24)
- Contrastive Language-Image Pre-training for the Italian Language (2021-08-19)
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training (2020-11-18)