Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


All-pairs Consistency Learning for Weakly Supervised Semantic Segmentation

Weixuan Sun, Yanhao Zhang, Zhen Qin, Zheyuan Liu, Lin Cheng, Fanyi Wang, Yiran Zhong, Nick Barnes

2023-08-08 · Weakly-Supervised Semantic Segmentation · Semantic Segmentation · Object Localization
Paper · PDF · Code (official)

Abstract

In this work, we propose a new transformer-based regularization to better localize objects for weakly supervised semantic segmentation (WSSS). In image-level WSSS, the Class Activation Map (CAM) is adopted to generate object localization as pseudo segmentation labels. To address the partial activation issue of CAMs, consistency regularization is employed to maintain activation intensity invariance across various image augmentations. However, such methods ignore pair-wise relations among regions within each CAM, which capture context and should also be invariant across image views. To this end, we propose a new all-pairs consistency regularization (ACR). Given a pair of augmented views, our approach regularizes the activation intensities between the two views, while also ensuring that the affinity across regions within each view remains consistent. We adopt vision transformers, whose self-attention mechanism naturally embeds pair-wise affinity. This enables us to simply regularize the distance between the attention matrices of augmented image pairs. Additionally, we introduce a novel class-wise localization method that leverages the gradients of the class token. Our method can be seamlessly integrated into existing transformer-based WSSS methods without modifying the architectures. We evaluate our method on the PASCAL VOC and MS COCO datasets. Our method produces noticeably better class localization maps (67.3% mIoU on PASCAL VOC train), resulting in superior WSSS performance.
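The core idea of the abstract's consistency term can be sketched in a few lines: compute the self-attention (pair-wise affinity) matrix for each of two augmented views and penalize the distance between them. The sketch below is a minimal NumPy illustration under assumed names (`attention_matrix`, `acr_loss` are illustrative, not the authors' API); the "augmentation" here is simply additive noise, standing in for real image augmentations with spatial alignment handled.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_matrix(q, k):
    # pair-wise affinity between tokens: softmax(Q K^T / sqrt(d))
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d))

def acr_loss(attn_a, attn_b):
    # all-pairs consistency: mean L1 distance between the two
    # views' attention (affinity) matrices
    return np.abs(attn_a - attn_b).mean()

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 64))          # 16 tokens, dim 64
# second "view": small perturbation as a stand-in for augmentation
tokens_aug = tokens + 0.01 * rng.standard_normal(tokens.shape)

attn_1 = attention_matrix(tokens, tokens)
attn_2 = attention_matrix(tokens_aug, tokens_aug)
loss = acr_loss(attn_1, attn_2)
```

In a real implementation this loss would be added to the classification objective and backpropagated through the transformer; identical views yield zero loss, and the penalty grows as the affinity structure diverges across augmentations.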

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Semantic Segmentation | COCO 2014 val | mIoU | 45 | ACR-WSSS (DeepLabV2-ResNet101) |
| Semantic Segmentation | PASCAL VOC 2012 val | mIoU | 71.2 | ACR-WSSS (DeepLabV1-ResNet101) |
| Semantic Segmentation | PASCAL VOC 2012 test | mIoU | 70.9 | ACR-WSSS (DeepLabV2-ResNet101) |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)