Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model

Yuxuan Zhang, Tianheng Cheng, Rui Hu, Lei Liu, Heng Liu, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang

2024-06-28 · Referring Expression · Interactive Segmentation · Referring Expression Segmentation · Segmentation · Language Modelling

Paper · PDF · Code (official)

Abstract

The Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation with visual prompts, but text prompts remain under-explored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or an LLM) are suitable for adapting SAM to referring expression segmentation, and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text); it comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 achieves state-of-the-art performance on RefCOCO/+/g for referring expression segmentation, demonstrating the superiority of prompting SAM with early vision-language fusion. In addition, the proposed EVF-SAM with 1.32B parameters achieves remarkably higher performance while reducing parameters by nearly 82% compared to previous SAM methods based on large multimodal models.
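The pipeline the abstract describes can be sketched in a few lines: a vision-language model with early fusion encodes the image and the referring text jointly, and the resulting embedding prompts SAM's mask decoder. The sketch below is a toy illustration of that data flow only; the class names, shapes, and the random-projection "encoder" are stand-ins, not the authors' implementation or SAM's actual API.

```python
import numpy as np

class EarlyFusionEncoder:
    """Stand-in for a BEIT-3-style multimodal encoder: image patches and
    text tokens are concatenated and transformed together (early fusion),
    rather than encoded separately and combined afterwards (late fusion)."""
    def __init__(self, dim=256, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, image_patches, text_tokens):
        # Joint token sequence: [image patches ; text tokens]
        tokens = np.concatenate([image_patches, text_tokens], axis=0)
        fused = tokens @ self.proj       # one shared transform over all tokens
        return fused.mean(axis=0)        # pooled referring-prompt embedding

class SamMaskDecoderStub:
    """Stand-in for SAM's prompt-conditioned mask decoder."""
    def __call__(self, image_features, prompt_embedding):
        # Similarity of each spatial feature to the prompt -> mask logits
        logits = image_features @ prompt_embedding
        return (logits > 0).astype(np.uint8)   # binary segmentation mask

def evf_sam_segment(image_patches, text_tokens, image_features):
    encoder = EarlyFusionEncoder(dim=image_patches.shape[1])
    prompt = encoder(image_patches, text_tokens)
    return SamMaskDecoderStub()(image_features, prompt)

# Toy inputs: 16 image patches, 4 text tokens, 64 spatial features, dim 256.
rng = np.random.default_rng(1)
mask = evf_sam_segment(
    rng.standard_normal((16, 256)),   # image patch tokens
    rng.standard_normal((4, 256)),    # referring-expression tokens
    rng.standard_normal((64, 256)),   # SAM image-encoder features
)
print(mask.shape)  # (64,)
```

The point of the early-fusion design is that the prompt embedding is computed from image and text together, so the text can attend to image content before SAM ever sees the prompt; a late-fusion variant would encode the two modalities independently and merge them only at the end.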

Results

Task                               Dataset          Metric       Value  Model
Instance Segmentation              RefCOCO testA    Overall IoU  84.2   EVF-SAM
Instance Segmentation              RefCOCO val      Overall IoU  82.4   EVF-SAM
Instance Segmentation              RefCOCO testB    Overall IoU  80.2   EVF-SAM
Instance Segmentation              RefCOCOg-test    Overall IoU  78.3   EVF-SAM
Instance Segmentation              RefCOCO+ val     Overall IoU  76.5   EVF-SAM
Instance Segmentation              RefCOCO+ testB   Overall IoU  71.9   EVF-SAM
Instance Segmentation              RefCOCO+ testA   Overall IoU  80.0   EVF-SAM
Instance Segmentation              RefCOCOg-val     Overall IoU  78.2   EVF-SAM
Referring Expression Segmentation  RefCOCO testA    Overall IoU  84.2   EVF-SAM
Referring Expression Segmentation  RefCOCO val      Overall IoU  82.4   EVF-SAM
Referring Expression Segmentation  RefCOCO testB    Overall IoU  80.2   EVF-SAM
Referring Expression Segmentation  RefCOCOg-test    Overall IoU  78.3   EVF-SAM
Referring Expression Segmentation  RefCOCO+ val     Overall IoU  76.5   EVF-SAM
Referring Expression Segmentation  RefCOCO+ testB   Overall IoU  71.9   EVF-SAM
Referring Expression Segmentation  RefCOCO+ testA   Overall IoU  80.0   EVF-SAM
Referring Expression Segmentation  RefCOCOg-val     Overall IoU  78.2   EVF-SAM

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)