Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Zero-Shot Semantic Segmentation on COCO-Stuff

Metric: harmonic mean IoU (hIoU) between seen- and unseen-class mIoU, evaluated in the transductive setting (higher is better)
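hIoU balances performance on seen and unseen classes by taking the harmonic mean of the two mIoU scores, so a model cannot score well by excelling on seen classes alone. A minimal sketch (the function name `hiou` and percentage-scale inputs are our assumptions, not an official API):

```python
def hiou(miou_seen: float, miou_unseen: float) -> float:
    """Harmonic mean of seen-class and unseen-class mIoU.

    Both inputs are on the same scale (e.g. percentages, as in the
    leaderboard below). Returns 0.0 when both inputs are zero.
    """
    if miou_seen + miou_unseen == 0:
        return 0.0
    return 2 * miou_seen * miou_unseen / (miou_seen + miou_unseen)
```

Because the harmonic mean is dominated by the smaller operand, a model with 60 mIoU on seen classes but only 30 on unseen classes scores hIoU 40, below the arithmetic mean of 45.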


Results

| # | Model | Transductive Setting hIoU | Extra Data | Paper | Date | Code |
|---|-------|---------------------------|------------|-------|------|------|
| 1 | OTSeg+ | 49.8 | No | OTSeg: Multi-prompt Sinkhorn Attention for Zero-... | 2024-03-21 | Code |
| 2 | CLIP-RC | 49.7 | No | - | - | Code |
| 3 | OTSeg | 49.5 | No | OTSeg: Multi-prompt Sinkhorn Attention for Zero-... | 2024-03-21 | Code |
| 4 | ZegCLIP | 48.5 | No | ZegCLIP: Towards Adapting CLIP for Zero-shot Sem... | 2022-12-07 | Code |
| 5 | MVP-SEG+ | 45.5 | No | MVP-SEG: Multi-View Prompt Learning for Open-Voc... | 2023-04-14 | - |
| 6 | FreeSeg | 45.3 | No | FreeSeg: Free Mask from Interpretable Contrastiv... | 2022-09-27 | - |
| 7 | MaskCLIP+ | 45 | No | Extract Free Dense Labels from CLIP | 2021-12-02 | Code |
| 8 | zsseg | 41.5 | No | A Simple Baseline for Open-Vocabulary Semantic S... | 2021-12-29 | Code |
| 9 | STRICT | 34.8 | No | A Closer Look at Self-training for Zero-Label Se... | 2021-04-21 | Code |
| 10 | SPNet | 30.3 | No | - | - | Code |
| 11 | CaGNet | 19.5 | No | Context-aware Feature Generation for Zero-shot S... | 2020-08-16 | Code |
| 12 | ZS5 | 16.2 | No | Zero-Shot Semantic Segmentation | 2019-06-03 | Code |