Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PraNet: Parallel Reverse Attention Network for Polyp Segmentation

Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, Ling Shao

2020-06-13 · Video Polyp Segmentation · Camouflaged Object Segmentation · Segmentation · Medical Image Segmentation

Links: Paper · PDF · Code (official) · Community implementations

Abstract

Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance since it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task for two major reasons: (i) polyps of the same type vary in size, color, and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp. To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the following components. In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, our PraNet is capable of calibrating misaligned predictions, improving the segmentation accuracy. Quantitative and qualitative evaluations on five challenging datasets across six metrics show that our PraNet improves the segmentation accuracy significantly, and offers advantages in generalizability and real-time segmentation efficiency.
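The reverse attention idea described in the abstract can be illustrated numerically: the complement of the sigmoid of the current prediction map re-weights the side features, steering refinement toward regions (typically boundaries) that the current prediction misses. Below is a minimal NumPy sketch of that weighting step only; the function name, shapes, and toy values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(feature_map, prediction_logits):
    """Weight features by the *complement* of the current prediction.

    feature_map:       (C, H, W) side-output features from the backbone
    prediction_logits: (H, W)    coarse segmentation logits (the global map
                                 or the previous stage's output)

    Returns features re-weighted to emphasize pixels the current prediction
    is least confident about, i.e. boundaries and missed regions.
    """
    attention = 1.0 - sigmoid(prediction_logits)  # reverse attention weights
    return feature_map * attention[None, :, :]    # broadcast over channels

# Toy example: a confidently-foreground pixel is suppressed (weight ~0),
# a confidently-background pixel passes through (weight ~1),
# an uncertain pixel is half-weighted (weight 0.5).
feats = np.ones((4, 2, 2))
logits = np.array([[10.0, -10.0],
                   [0.0, 0.0]])
out = reverse_attention(feats, logits)
```

In the full network this weighting is applied at several decoder stages, each stage's output serving as the prediction map for the next, which is the "recurrent cooperation" the abstract refers to.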

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Medical Image Segmentation | Kvasir-SEG | Average MAE | 0.03 | PraNet |
| Medical Image Segmentation | Kvasir-SEG | S-measure | 0.915 | PraNet |
| Medical Image Segmentation | Kvasir-SEG | mIoU | 0.849 | PraNet |
| Medical Image Segmentation | Kvasir-SEG | max E-measure | 0.948 | PraNet |
| Medical Image Segmentation | Kvasir-SEG | mean Dice | 0.898 | PraNet |
| Medical Image Segmentation | ETIS-LaribPolypDB | Average MAE | 0.031 | PraNet |
| Medical Image Segmentation | ETIS-LaribPolypDB | S-measure | 0.794 | PraNet |
| Medical Image Segmentation | ETIS-LaribPolypDB | mIoU | 0.567 | PraNet |
| Medical Image Segmentation | ETIS-LaribPolypDB | max E-measure | 0.841 | PraNet |
| Medical Image Segmentation | ETIS-LaribPolypDB | mean Dice | 0.628 | PraNet |
| Medical Image Segmentation | CVC-ColonDB | Average MAE | 0.045 | PraNet |
| Medical Image Segmentation | CVC-ColonDB | S-measure | 0.819 | PraNet |
| Medical Image Segmentation | CVC-ColonDB | mIoU | 0.649 | PraNet |
| Medical Image Segmentation | CVC-ColonDB | max E-measure | 0.869 | PraNet |
| Medical Image Segmentation | CVC-ColonDB | mean Dice | 0.709 | PraNet |
| Medical Image Segmentation | CVC-ClinicDB | mean Dice | 0.899 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | Dice | 0.621 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | S-measure | 0.733 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | Sensitivity | 0.524 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | mean E-measure | 0.753 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | mean F-measure | 0.632 | PraNet |
| Medical Image Segmentation | SUN-SEG-Easy (Unseen) | weighted F-measure | 0.572 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | Dice | 0.598 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | S-measure | 0.717 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | Sensitivity | 0.512 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | mean E-measure | 0.735 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | mean F-measure | 0.607 | PraNet |
| Medical Image Segmentation | SUN-SEG-Hard (Unseen) | weighted F-measure | 0.544 | PraNet |
| Camouflaged Object Segmentation | PCOD_1200 | S-measure | 0.904 | PraNet |
| Camouflaged Object Segmentation | CAMO | MAE | 0.094 | PraNet |
| Camouflaged Object Segmentation | CAMO | S-measure | 0.769 | PraNet |
| Camouflaged Object Segmentation | CAMO | weighted F-measure | 0.663 | PraNet |
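The mean Dice and mIoU values above are standard region-overlap metrics between a predicted mask and the ground truth. For a single binary mask pair they can be computed as in the plain-Python sketch below (an illustrative sketch, not the benchmarks' official evaluation code; the small epsilon guards against empty masks).

```python
def dice_and_iou(pred, gt, eps=1e-8):
    """Dice coefficient and IoU for two binary masks given as flat 0/1 lists.

    Dice = 2|P∩G| / (|P| + |G|)
    IoU  = |P∩G| / |P∪G|
    The two are related by Dice = 2*IoU / (1 + IoU).
    """
    inter = sum(p * g for p, g in zip(pred, gt))   # |P ∩ G|
    p_sum, g_sum = sum(pred), sum(gt)              # |P|, |G|
    dice = 2.0 * inter / (p_sum + g_sum + eps)
    iou = inter / (p_sum + g_sum - inter + eps)    # union = |P|+|G|-|P∩G|
    return dice, iou

# Toy masks: 3 predicted pixels, 3 true pixels, 2 overlapping
d, i = dice_and_iou([1, 1, 1, 0], [0, 1, 1, 1])
# inter = 2, |P| = |G| = 3  →  Dice = 4/6 ≈ 0.667, IoU = 2/4 = 0.5
```

The "mean" variants in the table average these per-image scores over a dataset, which is why mean Dice is consistently higher than mIoU for the same dataset.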

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)