Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Leveraging Hidden Positives for Unsupervised Semantic Segmentation

Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo

2023-03-27 · CVPR 2023
Tasks: Unsupervised Semantic Segmentation · Semantic Segmentation · Contrastive Learning
Paper · PDF · Code (official)

Abstract

The dramatic demand for manpower to produce pixel-level annotations triggered the advent of unsupervised semantic segmentation. Although recent work employing a vision transformer (ViT) backbone shows exceptional performance, it still lacks task-specific training guidance and local semantic consistency. To tackle these issues, we leverage contrastive learning, excavating hidden positives to learn rich semantic relationships and to ensure semantic consistency in local regions. Specifically, we first discover two types of global hidden positives for each anchor, task-agnostic and task-specific ones, based on the feature similarities defined by a fixed pre-trained backbone and a segmentation head in training, respectively. Gradually increasing the contribution of the latter induces the model to capture task-specific semantic features. In addition, we introduce a gradient propagation strategy to learn semantic consistency between adjacent patches, under the premise that nearby patches are highly likely to share the same semantics. Specifically, we propagate the loss to local hidden positives, semantically similar nearby patches, in proportion to predefined similarity scores. With these training schemes, our proposed method achieves new state-of-the-art (SOTA) results on the COCO-Stuff, Cityscapes, and Potsdam-3 datasets. Our code is available at: https://github.com/hynnsk/HP.
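The global hidden-positive mining described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `mine_hidden_positives`, the top-k selection, and the linear blending weight `alpha` are expository assumptions. What it preserves from the paper is the core idea that positives for each anchor come from feature similarity under a frozen pre-trained backbone (task-agnostic) and under the segmentation head in training (task-specific), with the latter's contribution increased gradually.

```python
import numpy as np

def cosine_sim(x):
    """Pairwise cosine similarity between row vectors."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def mine_hidden_positives(backbone_feats, head_feats, alpha, k=4):
    """For each anchor patch, pick the top-k most similar patches as positives.

    backbone_feats: (N, D) features from the frozen pre-trained backbone
                    (task-agnostic similarity).
    head_feats:     (N, D) features from the segmentation head in training
                    (task-specific similarity).
    alpha:          schedule ramped from 0 toward 1 during training, shifting
                    weight to the task-specific term (hypothetical blend).
    Returns an (N, k) array of positive indices per anchor.
    """
    sim = (1 - alpha) * cosine_sim(backbone_feats) + alpha * cosine_sim(head_feats)
    np.fill_diagonal(sim, -np.inf)  # an anchor is never its own positive
    return np.argsort(-sim, axis=1)[:, :k]
```

At `alpha = 0` the selection is purely task-agnostic; later in training the head-in-training similarities dominate, so the positives increasingly reflect task-specific semantics.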
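The local semantic-consistency scheme can likewise be sketched in a few lines. Again this is a simplified assumption-laden illustration, not the paper's code: it models "propagating the loss to semantically similar nearby patches in proportion to similarity scores" as adding each patch's loss at its 4-connected neighbors, weighted by feature cosine similarity and gated by a hypothetical threshold `tau`.

```python
import numpy as np

def local_consistency_loss(per_patch_loss, feats, tau=0.5):
    """Propagate each patch's loss to its 4-connected neighbors.

    per_patch_loss: (H, W) loss value at each patch.
    feats:          (H, W, D) patch features used to score neighbor similarity.
    tau:            similarity cutoff (assumed); dissimilar neighbors
                    receive no propagated loss.
    """
    H, W = per_patch_loss.shape
    f = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
    total = per_patch_loss.sum()
    # Right and down neighbor pairs cover all 4-connected edges once.
    for dy, dx in ((0, 1), (1, 0)):
        sim = (f[:H - dy, :W - dx] * f[dy:, dx:]).sum(-1)  # neighbor cosine sim
        w = np.where(sim > tau, sim, 0.0)
        total += (w * per_patch_loss[:H - dy, :W - dx]).sum()  # anchor -> neighbor
        total += (w * per_patch_loss[dy:, dx:]).sum()          # neighbor -> anchor
    return total
```

When two adjacent patches share semantics (high similarity), gradients from one patch's loss also flow through the other, which is the mechanism the paper uses to enforce consistency in local regions.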

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | Potsdam-3 | Accuracy | 82.4 | HP |
| Semantic Segmentation | Cityscapes test | Accuracy | 80.1 | HP |
| Semantic Segmentation | Cityscapes test | mIoU | 18.4 | HP |
| Semantic Segmentation | COCO-Stuff-27 | Clustering [Accuracy] | 57.2 | HP (ViT-S/8) |
| Semantic Segmentation | COCO-Stuff-27 | Clustering [mIoU] | 24.6 | HP (ViT-S/8) |
| Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [Accuracy] | 75.6 | HP (ViT-S/8) |
| Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [mIoU] | 42.7 | HP (ViT-S/8) |
| Semantic Segmentation | COCO-Stuff-27 | Clustering [Accuracy] | 54.5 | HP (ViT-S/16) |
| Semantic Segmentation | COCO-Stuff-27 | Clustering [mIoU] | 24.3 | HP (ViT-S/16) |
| Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [Accuracy] | 74.1 | HP (ViT-S/16) |
| Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [mIoU] | 39.1 | HP (ViT-S/16) |
| Unsupervised Semantic Segmentation | Potsdam-3 | Accuracy | 82.4 | HP |
| Unsupervised Semantic Segmentation | Cityscapes test | Accuracy | 80.1 | HP |
| Unsupervised Semantic Segmentation | Cityscapes test | mIoU | 18.4 | HP |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Clustering [Accuracy] | 57.2 | HP (ViT-S/8) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Clustering [mIoU] | 24.6 | HP (ViT-S/8) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [Accuracy] | 75.6 | HP (ViT-S/8) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [mIoU] | 42.7 | HP (ViT-S/8) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Clustering [Accuracy] | 54.5 | HP (ViT-S/16) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Clustering [mIoU] | 24.3 | HP (ViT-S/16) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [Accuracy] | 74.1 | HP (ViT-S/16) |
| Unsupervised Semantic Segmentation | COCO-Stuff-27 | Linear Classifier [mIoU] | 39.1 | HP (ViT-S/16) |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)