Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Segment Anything in High Quality

Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu

Published 2023-06-02 · NeurIPS 2023
Tasks: Zero-Shot Segmentation, Zero-Shot Instance Segmentation
Links: Paper · PDF · Code (official)

Abstract

The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of applying it only to mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is trained only on this introduced dataset of 44K masks, which takes only 4 hours on 8 GPUs. We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, 8 of which are evaluated in a zero-shot transfer protocol. Our code and pretrained models are at https://github.com/SysCV/SAM-HQ.
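The core idea in the abstract, a learnable High-Quality Output Token reading from mask-decoder features fused with early and final ViT features, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the channel/spatial dimensions, the 1x1-projection-plus-sum fusion, and all variable names here are illustrative assumptions, and the learnable pieces are stand-in random arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): C channels, H x W spatial grid.
C, H, W = 32, 16, 16

# Stand-ins for SAM's frozen features: early ViT, final ViT, and mask-decoder output.
early_vit = rng.standard_normal((C, H, W))
final_vit = rng.standard_normal((C, H, W))
decoder_feats = rng.standard_normal((C, H, W))

# Stand-ins for the small set of learnable parameters HQ-SAM adds:
# per-source 1x1 projections and the High-Quality Output Token itself.
proj_early = rng.standard_normal((C, C)) * 0.1
proj_final = rng.standard_normal((C, C)) * 0.1
hq_token = rng.standard_normal(C)

def conv1x1(weights, feats):
    """Apply a 1x1 convolution (a per-pixel linear map) to C x H x W features."""
    c, h, w = feats.shape
    return (weights @ feats.reshape(c, h * w)).reshape(-1, h, w)

# Fuse the mask-decoder features with projected early and final ViT features
# (element-wise sum here), echoing the paper's global-local feature fusion.
fused = decoder_feats + conv1x1(proj_early, early_vit) + conv1x1(proj_final, final_vit)

# The HQ token predicts the mask as a per-pixel dot product with the fused map.
mask_logits = np.einsum("c,chw->hw", hq_token, fused)
mask = mask_logits > 0  # binary high-quality mask, shape (H, W)
```

Because only `proj_early`, `proj_final`, and `hq_token` would be trained while the SAM backbone stays frozen, the added parameter count is tiny relative to the base model, which is the property the abstract emphasizes.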

Results

Task                   | Dataset                  | Metric  | Value | Model
Zero-Shot Segmentation | Segmentation in the Wild | Mean AP | 49.6  | Grounded HQ-SAM

Related Papers

Compress Any Segment Anything Model (SAM) (2025-07-11)
Foundation Models for Zero-Shot Segmentation of Scientific Images without AI-Ready Data (2025-06-30)
MRI-CORE: A Foundation Model for Magnetic Resonance Imaging (2025-06-13)
Textile Analysis for Recycling Automation using Transfer Learning and Zero-Shot Foundation Models (2025-06-06)
Zero-Shot Tree Detection and Segmentation from Aerial Forest Imagery (2025-06-03)
Removing Watermarks with Partial Regeneration using Semantic Information (2025-05-13)
Adapting a Segmentation Foundation Model for Medical Image Classification (2025-05-09)
AI-Driven Segmentation and Analysis of Microbial Cells (2025-05-01)