
Visual Prompting for Generalized Few-shot Segmentation: A Multi-scale Approach

Mir Rayat Imtiaz Hossain, Mennatullah Siam, Leonid Sigal, James J. Little

2024-04-17 · CVPR 2024 · Semantic Segmentation

Abstract

The emergence of attention-based transformer models has led to their extensive use in various tasks, due to their superior generalization and transfer properties. Recent research has demonstrated that such models, when prompted appropriately, are excellent for few-shot inference. However, such techniques are under-explored for dense prediction tasks like semantic segmentation. In this work, we examine the effectiveness of prompting a transformer-decoder with learned visual prompts for the generalized few-shot segmentation (GFSS) task. Our goal is to achieve strong performance not only on novel categories with limited examples, but also to retain performance on base categories. We propose an approach to learn visual prompts with limited examples. These learned visual prompts are used to prompt a multiscale transformer decoder to facilitate accurate dense predictions. Additionally, we introduce a unidirectional causal attention mechanism between the novel prompts, learned with limited examples, and the base prompts, learned with abundant data. This mechanism enriches the novel prompts without deteriorating the base class performance. Overall, this form of prompting helps us achieve state-of-the-art performance for GFSS on two different benchmark datasets: COCO-$20^i$ and Pascal-$5^i$, without the need for test-time optimization (or transduction). Furthermore, test-time optimization leveraging unlabelled test data can be used to improve the prompts, which we refer to as transductive prompt tuning.
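The key mechanism in the abstract is a unidirectional (causal) attention between prompt sets: novel prompts, learned from few examples, may attend to the abundant-data base prompts and be enriched by them, while base prompts never attend to novel prompts, so base-class behaviour is left untouched. The following is a minimal numpy sketch of that masking idea, not the paper's implementation; all function and variable names are illustrative, and shapes and scaling are simplified.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unidirectional_prompt_attention(base, novel):
    """Toy sketch of unidirectional attention between prompt sets.

    base:  (Nb, D) base-class prompts (learned with abundant data)
    novel: (Nn, D) novel-class prompts (learned with few examples)

    Novel rows may attend to all prompts; base rows are masked so they
    attend only to other base prompts, leaving them unaffected by the
    novel prompts.
    """
    prompts = np.concatenate([base, novel], axis=0)   # (Nb+Nn, D)
    Nb = base.shape[0]
    D = prompts.shape[1]
    scores = prompts @ prompts.T / np.sqrt(D)         # (Nb+Nn, Nb+Nn)
    # Block the base -> novel direction only.
    mask = np.zeros_like(scores, dtype=bool)
    mask[:Nb, Nb:] = True
    scores = np.where(mask, -np.inf, scores)
    attn = softmax(scores, axis=-1)
    return attn @ prompts                             # enriched prompts
```

With this mask, the first `Nb` output rows depend only on `base`, so swapping in different novel prompts changes the novel outputs but leaves the base outputs bit-for-bit identical, which is the "enrich novel without deteriorating base" property the abstract describes.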

Results

| Task | Dataset | Metric | Value | Model |
|------|---------|--------|-------|-------|
| Few-Shot Learning | PASCAL-5i (1-Shot) | Mean Base and Novel | 58.11 | VisualPromptGFSS |
| Few-Shot Learning | PASCAL-5i (5-Shot) | Mean Base and Novel | 66.27 | VisualPromptGFSS |
| Few-Shot Learning | COCO-20i (5-shot) | Mean Base and Novel | 42.48 | VisualPromptGFSS |
| Few-Shot Learning | COCO-20i (1-shot) | Mean Base and Novel | 36.05 | VisualPromptGFSS |
| Few-Shot Semantic Segmentation | PASCAL-5i (1-Shot) | Mean Base and Novel | 58.11 | VisualPromptGFSS |
| Few-Shot Semantic Segmentation | PASCAL-5i (5-Shot) | Mean Base and Novel | 66.27 | VisualPromptGFSS |
| Few-Shot Semantic Segmentation | COCO-20i (5-shot) | Mean Base and Novel | 42.48 | VisualPromptGFSS |
| Few-Shot Semantic Segmentation | COCO-20i (1-shot) | Mean Base and Novel | 36.05 | VisualPromptGFSS |
| Meta-Learning | PASCAL-5i (1-Shot) | Mean Base and Novel | 58.11 | VisualPromptGFSS |
| Meta-Learning | PASCAL-5i (5-Shot) | Mean Base and Novel | 66.27 | VisualPromptGFSS |
| Meta-Learning | COCO-20i (5-shot) | Mean Base and Novel | 42.48 | VisualPromptGFSS |
| Meta-Learning | COCO-20i (1-shot) | Mean Base and Novel | 36.05 | VisualPromptGFSS |

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)