Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MemorySAM: Memorize Modalities and Semantics with Segment Anything Model 2 for Multi-modal Semantic Segmentation

Chenfei Liao, Xu Zheng, Yuanhuiyi Lyu, Haiwei Xue, Yihong Cao, Jiawen Wang, Kailun Yang, Xuming Hu

2025-03-09 · Semantic Segmentation

Abstract

Research has focused on Multi-Modal Semantic Segmentation (MMSS), where pixel-wise predictions are derived from multiple visual modalities captured by diverse sensors. Recently, the large vision model Segment Anything Model 2 (SAM2) has shown strong zero-shot segmentation performance on both images and videos. Extending SAM2 to MMSS raises two questions: (1) How can SAM2 be adapted to multi-modal data? (2) How can SAM2 better understand semantics? Inspired by cross-frame correlation in videos, we propose to treat multi-modal data as a sequence of frames representing the same scene. Our key idea is to "memorize" the modality-agnostic information and "memorize" the semantics related to the targeted scene. To achieve this, we apply SAM2's memory mechanisms across multi-modal data to capture modality-agnostic features. Meanwhile, to memorize semantic knowledge, we propose a training-only Semantic Prototype Memory Module (SPMM) that stores category-level prototypes across training, facilitating SAM2's transition from instance to semantic segmentation. A prototypical adaptation loss is imposed iteratively between global and local prototypes to align and refine SAM2's semantic understanding. Extensive experimental results demonstrate that our proposed MemorySAM outperforms SoTA methods by large margins on both synthetic and real-world benchmarks (65.38% mIoU on DELIVER, 52.88% mIoU on MCubeS). Source code will be made publicly available.
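The SPMM idea described above can be illustrated with a minimal sketch: keep one global prototype per category, update it by an exponential moving average of the local (batch) prototypes seen during training, and penalize the distance between pixel features and their category's global prototype. This is a hypothetical NumPy illustration of the general technique, not the paper's released implementation; the momentum value, loss form, and function names are assumptions.

```python
import numpy as np

def update_prototypes(memory, feats, labels, momentum=0.9):
    """EMA update of per-category global prototypes (SPMM-style sketch).

    memory : (C, D) array, one global prototype per category
    feats  : (N, D) array of pixel/region features
    labels : (N,) integer category labels

    For each category present in the batch, the local prototype is the
    mean feature, and the global prototype moves toward it.
    """
    for c in np.unique(labels):
        local = feats[labels == c].mean(axis=0)          # local prototype
        memory[c] = momentum * memory[c] + (1 - momentum) * local
    return memory

def prototype_alignment_loss(memory, feats, labels):
    """Mean squared distance between each feature and its category's
    global prototype (one plausible form of a prototypical loss)."""
    return float(np.mean((feats - memory[labels]) ** 2))

# Toy usage: random features for 3 categories in a 4-dim feature space.
rng = np.random.default_rng(0)
memory = rng.standard_normal((3, 4))
feats = rng.standard_normal((10, 4))
labels = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])

loss_before = prototype_alignment_loss(memory, feats, labels)
memory = update_prototypes(memory, feats, labels)
loss_after = prototype_alignment_loss(memory, feats, labels)
# Pulling each global prototype toward the batch mean of its category
# reduces the alignment loss on that batch.
```

In the full method this loss would be backpropagated into SAM2's features so that instance-level features cluster around category-level semantics; here only the prototype-memory bookkeeping is shown.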

Results

Task                  | Dataset | Metric | Value | Model
Semantic Segmentation | MCubeS  | mIoU   | 52.88 | MemorySAM-B+ (RGB-A-D-N)
Semantic Segmentation | MCubeS  | mIoU   | 52.2  | MemorySAM-B+ (RGB-A-D)
Semantic Segmentation | MCubeS  | mIoU   | 51.2  | MemorySAM-B+ (RGB-A)
Semantic Segmentation | DELIVER | mIoU   | 65.38 | MemorySAM-B+ (R-D-E-L)
Semantic Segmentation | DELIVER | mIoU   | 63.48 | MemorySAM-B+ (R-D)
Semantic Segmentation | DELIVER | mIoU   | 62.42 | MemorySAM-B+ (R-D-E)
Semantic Segmentation | DELIVER | mIoU   | 53.22 | MemorySAM-B+ (RGB)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)