Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP

Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu

2022-10-09 · CVPR 2023
Tasks: Open-Vocabulary Semantic Segmentation, Semantic Segmentation, Image Captioning
Paper · PDF · Code (official)

Abstract

Open-vocabulary semantic segmentation aims to segment an image into semantic regions according to text descriptions, which may not have been seen during training. Recent two-stage methods first generate class-agnostic mask proposals and then leverage pre-trained vision-language models, e.g., CLIP, to classify masked regions. We identify the performance bottleneck of this paradigm to be the pre-trained CLIP model, since it does not perform well on masked images. To address this, we propose to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. We collect training data by mining an existing image-caption dataset (e.g., COCO Captions), using CLIP to match masked image regions to nouns in the image captions. Compared with the more precise and manually annotated segmentation labels with fixed classes (e.g., COCO-Stuff), we find our noisy but diverse dataset can better retain CLIP's generalization ability. Along with finetuning the entire model, we utilize the "blank" areas in masked images using a method we dub mask prompt tuning. Experiments demonstrate mask prompt tuning brings significant improvement without modifying any weights of CLIP, and it can further improve a fully finetuned model. In particular, when trained on COCO and evaluated on ADE20K-150, our best model achieves 29.6% mIoU, which is +8.5% higher than the previous state-of-the-art. For the first time, open-vocabulary generalist models match the performance of supervised specialist models in 2017 without dataset-specific adaptations.
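The two ideas in the abstract — mining training pairs by matching masked image regions to caption nouns via CLIP similarity, and mask prompt tuning, which replaces the blank (masked-out) patch tokens with learnable prompt tokens — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the embeddings below are numpy stand-ins for what CLIP's image and text encoders would produce, and `match_regions_to_nouns`, `apply_mask_prompt`, and the `threshold` value are hypothetical names chosen for this sketch.

```python
import numpy as np

def match_regions_to_nouns(region_emb, noun_emb, threshold=0.2):
    """Assign each masked-region embedding the caption noun whose text
    embedding is most cosine-similar; drop low-confidence matches.
    In the paper's data-mining step, both embedding sets would come from
    CLIP's encoders; here they are plain arrays."""
    # L2-normalize so dot products are cosine similarities
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    t = noun_emb / np.linalg.norm(noun_emb, axis=1, keepdims=True)
    sims = r @ t.T                       # (num_regions, num_nouns)
    best = sims.argmax(axis=1)           # best noun index per region
    scores = sims[np.arange(len(best)), best]
    return [(i, int(j), float(s))
            for i, (j, s) in enumerate(zip(best, scores)) if s >= threshold]

def apply_mask_prompt(patch_tokens, keep_mask, prompt_token):
    """Mask prompt tuning, schematically: instead of feeding CLIP zeroed-out
    tokens for the blank area outside the mask, substitute a learnable
    prompt token. CLIP's own weights are untouched."""
    out = patch_tokens.copy()
    out[~keep_mask] = prompt_token       # broadcast prompt into blank slots
    return out
```

In the actual pipeline the prompt token would be a trainable parameter optimized end-to-end, and region/noun similarity would be computed in CLIP's joint embedding space; the sketch only shows the data flow of the two steps.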

Results

Task | Dataset | Metric | Value | Model
Semantic Segmentation | Replica | mIoU | 20.7 | OVSeg
Open-Vocabulary Semantic Segmentation | ADE20K-847 | mIoU | 9.0 | OVSeg Swin-B
Open-Vocabulary Semantic Segmentation | PASCAL Context-459 | mIoU | 12.4 | OVSeg Swin-B
Open-Vocabulary Semantic Segmentation | PascalVOC-20 | mIoU | 94.5 | OVSeg Swin-B
Open-Vocabulary Semantic Segmentation | PASCAL Context-59 | mIoU | 55.7 | OVSeg Swin-B
Open-Vocabulary Semantic Segmentation | ADE20K-150 | mIoU | 29.6 | OVSeg Swin-B

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Personalized OVSS: Understanding Personal Concept in Open-Vocabulary Semantic Segmentation (2025-07-15)