Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Prototype as Query for Few Shot Semantic Segmentation

Leilei Cao, Yibo Guo, Ye Yuan, Qiangguo Jin

2022-11-27 · Few-Shot Semantic Segmentation
Paper · PDF · Code (official)

Abstract

Few-shot Semantic Segmentation (FSS) was proposed to segment unseen classes in a query image by referring to only a few annotated examples, called support images. One characteristic of FSS is spatial inconsistency between query and support targets, e.g., in texture or appearance. This greatly challenges the generalization ability of FSS methods, which must effectively exploit the dependency between the query image and the support examples. Most existing methods abstract support features into prototype vectors and implement the interaction with query features using cosine similarity or feature concatenation. However, this simple interaction may not capture spatial details in query features. To alleviate this limitation, a few methods exploit all pixel-wise support information by computing pixel-wise correlations between paired query and support features with the attention mechanism of the Transformer. These approaches suffer from the heavy computation of dot-product attention between all pixels of the support and query features. In this paper, we propose a simple yet effective Transformer-based framework, termed ProtoFormer, to fully capture spatial details in query features. It treats the abstracted prototype of the target class in the support features as the Query and the query features as the Key and Value embeddings, which are fed to the Transformer decoder. In this way, spatial details are better captured and the semantic features of the target class in the query image are focused on. The output of the Transformer-based module can be viewed as semantic-aware dynamic kernels that filter the segmentation mask out of the enriched query features. Extensive experiments on PASCAL-$5^{i}$ and COCO-$20^{i}$ show that ProtoFormer significantly advances the state of the art.
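The core mechanism described in the abstract can be sketched in a few lines of PyTorch: a prototype is pooled from the masked support features, used as the single Query token of a Transformer decoder whose Key/Value memory is the flattened query-image feature map, and the decoder output acts as a dynamic kernel over the query features. This is a minimal illustration based only on the abstract, not the authors' official implementation; all module sizes and shapes are assumptions.

```python
import torch
import torch.nn as nn


class ProtoFormerSketch(nn.Module):
    """Illustrative sketch of the prototype-as-Query idea (not the official model)."""

    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)

    @staticmethod
    def masked_average_pool(feat, mask):
        # feat: (B, C, H, W); mask: (B, 1, H, W) binary support mask.
        # Abstract the target-class support features into one prototype vector.
        masked = feat * mask
        return masked.sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)

    def forward(self, query_feat, support_feat, support_mask):
        # query_feat, support_feat: (B, C, H, W)
        proto = self.masked_average_pool(support_feat, support_mask)   # (B, C)
        tgt = proto.unsqueeze(1)                                       # (B, 1, C) -> Query
        memory = query_feat.flatten(2).transpose(1, 2)                 # (B, H*W, C) -> Key/Value
        kernel = self.decoder(tgt, memory)                             # (B, 1, C)
        # Use the output as a semantic-aware dynamic kernel that filters
        # the foreground logits out of the query features.
        return torch.einsum("bkc,bchw->bkhw", kernel, query_feat)      # (B, 1, H, W)


model = ProtoFormerSketch()
q = torch.randn(2, 256, 16, 16)
s = torch.randn(2, 256, 16, 16)
m = (torch.rand(2, 1, 16, 16) > 0.5).float()
logits = model(q, s, m)  # per-pixel foreground logits for the query image
```

Because the decoder attends from a single prototype token to the H*W query positions, attention cost is linear in the number of query pixels, which is the efficiency argument the abstract makes against all-pairs pixel-wise correlation.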

Results

The source page lists identical results under three task tags (Few-Shot Semantic Segmentation, Few-Shot Learning, and Meta-Learning); they are consolidated below.

Dataset   | Setting | Model                    | Mean IoU | FB-IoU
PASCAL-5i | 1-shot  | ProtoFormer (ResNet-50)  | 63.1     | 72.6
PASCAL-5i | 1-shot  | ProtoFormer (ResNet-101) | 63.2     | 72.6
PASCAL-5i | 5-shot  | ProtoFormer (ResNet-50)  | 67.4     | 77.1
PASCAL-5i | 5-shot  | ProtoFormer (ResNet-101) | 67.0     | 76.3
COCO-20i  | 1-shot  | ProtoFormer (ResNet-50)  | 45.7     | 69.6
COCO-20i  | 1-shot  | ProtoFormer (ResNet-101) | 47.0     | 70.0
COCO-20i  | 5-shot  | ProtoFormer (ResNet-50)  | 53.4     | 73.3
COCO-20i  | 5-shot  | ProtoFormer (ResNet-101) | 54.7     | 74.6

Related Papers

Adapter Naturally Serves as Decoupler for Cross-Domain Few-Shot Semantic Segmentation (2025-06-09)
DINOv2-powered Few-Shot Semantic Segmentation: A Unified Framework via Cross-Model Distillation and 4D Correlation Mining (2025-04-22)
FSSUWNet: Mitigating the Fragility of Pre-trained Models with Feature Enhancement for Few-Shot Semantic Segmentation in Underwater Images (2025-04-01)
Exploring Few-Shot Defect Segmentation in General Industrial Scenarios with Metric Learning and Vision Foundation Models (2025-02-03)
AdaSemSeg: An Adaptive Few-shot Semantic Segmentation of Seismic Facies (2025-01-28)
Overcoming Support Dilution for Robust Few-shot Semantic Segmentation (2025-01-23)
Few-shot Structure-Informed Machinery Part Segmentation with Foundation Models and Graph Neural Networks (2025-01-17)
DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation (2025-01-01)