Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer For Efficient Medical Image Segmentation

Shehan Perera, Yunus Erzurumlu, Deepak Gulati, Alper Yilmaz

2024-09-04 · Skin Lesion Segmentation · Lesion Segmentation · Semantic Segmentation · Medical Image Segmentation · Medical Image Analysis · Skin Cancer Segmentation · Image Segmentation
Paper · PDF · Code (official)

Abstract

Skin cancer segmentation poses a significant challenge in medical image analysis. Numerous existing solutions, predominantly CNN-based, face issues related to a lack of global contextual understanding. Alternatively, some approaches resort to large-scale Transformer models to bridge the global contextual gap, but at the expense of model size and computational complexity. Finally, many Transformer-based approaches rely primarily on CNN-based decoders, overlooking the benefits of Transformer-based decoding. Recognizing these limitations, we address the need for efficient, lightweight solutions by introducing MobileUNETR, which aims to overcome the performance constraints of both CNNs and Transformers while minimizing model size, presenting a promising stride towards efficient image segmentation. MobileUNETR has three main features: 1) a lightweight hybrid CNN-Transformer encoder that balances local and global contextual feature extraction in an efficient manner; 2) a novel hybrid decoder that simultaneously utilizes low-level and global features at different resolutions within the decoding stage for accurate mask generation; 3) surpassing large and complex architectures, MobileUNETR achieves superior performance with only 3 million parameters and a computational complexity of 1.3 GFLOPs, a 10x and 23x reduction in parameters and FLOPs, respectively. Extensive experiments validate the effectiveness of the proposed method on four publicly available skin lesion segmentation datasets: ISIC 2016, ISIC 2017, ISIC 2018, and PH2. The code is publicly available at: https://github.com/OSUPCVLab/MobileUNETR.git
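The decoder idea in point 2 — fusing low-level, high-resolution features with coarse global features — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (nearest-neighbour upsampling plus channel concatenation); the paper's actual hybrid decoder also applies learned convolution/attention blocks, which are omitted here.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x spatial upsampling for a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse(low_level, global_feat):
    # Bring the coarse global features up to the low-level spatial
    # resolution, then concatenate along the channel axis. This is only a
    # stand-in for the hybrid decoder block described in the abstract.
    up = global_feat
    while up.shape[1] < low_level.shape[1]:
        up = upsample2x(up)
    return np.concatenate([low_level, up], axis=0)

# Toy shapes: a 64-channel low-level map at 32x32 and a 128-channel
# global map at 8x8, as an encoder might produce.
low = np.zeros((64, 32, 32))
glb = np.zeros((128, 8, 8))
fused = fuse(low, glb)
print(fused.shape)  # (192, 32, 32)
```

The fused tensor carries both fine spatial detail (from the low-level path) and global context (from the upsampled path), which is what the decoder's mask-generation stage consumes.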

Results

Task                        Dataset    Metric        Value    Model
Medical Image Segmentation  ISIC 2018  Accuracy      94.4     MobileUNETR
Medical Image Segmentation  ISIC 2018  Mean Dice     90.74    MobileUNETR
Medical Image Segmentation  ISIC 2018  Mean IoU      0.8456   MobileUNETR
Medical Image Segmentation  PH2        Dice Score    0.957    MobileUNETR
Semantic Segmentation       PH2        Average Dice  95.7     MobileUNETR
Semantic Segmentation       PH2        Average IoU   92.3     MobileUNETR
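As a rough sanity check on the table, Dice and IoU are related by Dice = 2·IoU / (1 + IoU) for a single prediction/ground-truth pair. The reported numbers are dataset averages, and averaging does not preserve this identity exactly, so the implied values only approximately match:

```python
def dice_from_iou(iou):
    # For a single mask pair: Dice = 2*IoU / (1 + IoU).
    return 2 * iou / (1 + iou)

# PH2 row: Average IoU 92.3 implies Dice ~96.0 (reported: 95.7).
print(round(dice_from_iou(0.923), 3))   # 0.96
# ISIC 2018 row: Mean IoU 0.8456 implies Dice ~0.916 (reported: 0.9074).
print(round(dice_from_iou(0.8456), 3))  # 0.916
```

The small gaps are expected, since mean Dice and mean IoU are each averaged per image before being reported.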

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)