Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Medical Image Segmentation Using Squeeze-and-Expansion Transformers

Shaohua Li, Xiuchao Sui, Xiangde Luo, Xinxing Xu, Yong Liu, Rick Goh

2021-05-20 · Tumor Segmentation · Optic Disc Segmentation · Domain Generalization · Segmentation · Optic Cup Segmentation · Semantic Segmentation · Medical Image Segmentation · Brain Tumor Segmentation · Image Segmentation
Paper · PDF · Code (official)

Abstract

Medical image segmentation is important for computer-aided diagnosis. Good segmentation demands that the model see the big picture and fine details simultaneously, i.e., learn image features that incorporate large context while keeping high spatial resolution. To approach this goal, the most widely used methods, U-Net and its variants, extract and fuse multi-scale features. However, the fused features still have small "effective receptive fields" with a focus on local image cues, limiting their performance. In this work, we propose Segtran, an alternative segmentation framework based on transformers, which have unlimited "effective receptive fields" even at high feature resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer: a squeezed attention block regularizes the self-attention of transformers, and an expansion block learns diversified representations. Additionally, we propose a new positional encoding scheme for transformers, imposing a continuity inductive bias for images. Experiments were performed on 2D and 3D medical image segmentation tasks: optic disc/cup segmentation in fundus images (REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain tumor segmentation in MRI scans (BraTS'19 challenge). Compared with representative existing methods, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities. The source code of Segtran is released at https://github.com/askerlee/segtran.
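The "squeezed attention" idea, routing attention through a small set of inducing codes so that cost scales with the number of codes rather than quadratically with the number of tokens, can be sketched as follows. This is a simplified NumPy illustration under my own assumptions (single head, no learned projections, function and variable names are invented), not the official Segtran implementation, which lives in the repository linked above.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squeezed_attention(X, codes, d_k):
    """Two-stage attention through M inducing codes, M << N.

    X:     (N, d) token features.
    codes: (M, d) learned code vectors (the "squeeze" bottleneck).
    Stage 1 (squeeze): codes attend over the N tokens -> (M, d) summary.
    Stage 2 (expand):  tokens attend over the summary -> (N, d) output.
    Total cost is O(N*M*d) instead of the O(N^2*d) of full self-attention.
    """
    # Stage 1: codes as queries, tokens as keys/values
    A1 = softmax(codes @ X.T / np.sqrt(d_k), axis=-1)    # (M, N)
    summary = A1 @ X                                     # (M, d)
    # Stage 2: tokens as queries, summary as keys/values
    A2 = softmax(X @ summary.T / np.sqrt(d_k), axis=-1)  # (N, M)
    return A2 @ summary                                  # (N, d)

rng = np.random.default_rng(0)
N, M, d = 64, 8, 16
X = rng.standard_normal((N, d))
codes = rng.standard_normal((M, d))
out = squeezed_attention(X, codes, d)
print(out.shape)  # (64, 16)
```

The bottleneck of M codes is what "regularizes" the attention: every token's output is a convex combination of only M summary vectors, which limits how sharply attention can overfit to individual tokens.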

Results

Task                       | Dataset          | Metric | Value | Model
Medical Image Segmentation | BRATS 2019       | Avg.   | 0.817 | Segtran (i3d)
Medical Image Segmentation | BRATS 2019       | TC     | 0.817 | Segtran (i3d)
Medical Image Segmentation | BRATS 2019       | Avg.   | 0.812 | Extension of nnU-Net
Medical Image Segmentation | BRATS 2019       | ET     | 0.74  | Extension of nnU-Net
Medical Image Segmentation | BRATS 2019       | TC     | 0.807 | Extension of nnU-Net
Medical Image Segmentation | BRATS 2019       | WT     | 0.894 | Extension of nnU-Net
Medical Image Segmentation | BRATS 2019       | ET     | 0.729 | Bag of tricks
Medical Image Segmentation | BRATS 2019       | TC     | 0.802 | Bag of tricks
Medical Image Segmentation | BRATS 2019       | WT     | 0.895 | Bag of tricks
Optic Cup Segmentation     | REFUGE Challenge | Dice   | 0.872 | Segtran (EfficientNet-B4)
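The values above are Dice scores (for BraTS, reported per region: ET/TC/WT are the standard enhancing tumor, tumor core, and whole tumor sub-regions, with Avg. their mean). A minimal sketch of the Dice coefficient on binary masks, with an invented helper name and a small epsilon for empty masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps keeps the score defined when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(round(dice(pred, target), 3))  # 2*2 / (3+3) -> 0.667
```

Dice ranges from 0 (no overlap) to 1 (perfect overlap), so the 0.872 optic-cup result means predicted and ground-truth cup masks overlap heavily.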

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)