Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network

Daniel G. P. Petrini, Carlos Shimizu, Rosimeire A. Roela, Gabriel V. Valente, Maria A. A. K. Folgueira, Hae Yong Kim

Published: 2021-10-01 · IEEE Access, 2022
Tasks: Cancer-no cancer per breast classification · Cancer-no cancer per image classification · Transfer Learning · Specificity
Links: Paper · PDF · Code (official)

Abstract

Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques performs two transfer-learning steps: the first uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create a "single-view whole-image classifier". We propose a third transfer-learning step to obtain a "two-view classifier" that uses the two standard mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system end-to-end on the CBIS-DDSM dataset. To ensure statistical robustness, we test our system twice, using (a) 5-fold cross-validation and (b) the original training/test split of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are all 85.13% at the equal-error-rate point of the ROC curve). Using the original dataset split, our technique achieved an AUC of 0.8483, to our knowledge the highest reported AUC for this problem, although subtle differences in the testing conditions of each work do not allow for an exact comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
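The reported accuracy, sensitivity, and specificity coincide at 85.13% because they are all measured at the equal-error-rate (EER) point of the ROC curve: the threshold where sensitivity (true-positive rate) equals specificity (true-negative rate). A minimal stdlib-only sketch of locating that operating point; the scores and labels below are illustrative toy data, not the paper's predictions:

```python
# Sketch (assumed data): find the ROC equal-error-rate operating point,
# i.e. the threshold where sensitivity is closest to specificity.

def eer_point(scores, labels):
    """Return (threshold, sensitivity, specificity) where sens ~= spec."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best = None
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)  # true-positive rate
        spec = sum(s < t for s in neg) / len(neg)   # true-negative rate
        gap = abs(sens - spec)
        if best is None or gap < best[0]:
            best = (gap, t, sens, spec)
    return best[1:]

# Toy scores and labels, purely for illustration.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.3]
labels = [0,   0,   1,    1,   1,    0,   1,   0]
t, sens, spec = eer_point(scores, labels)  # sens == spec == 0.75 here
```

At the EER threshold, a single percentage summarizes both error types, which is why the paper can quote one number for accuracy, sensitivity, and specificity.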

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Binary Classification | CBIS-DDSM | AUC | 0.8033 | SingleView_PatchBased_EfficientNet-B0 |
| Binary Classification | CBIS-DDSM | AUC | 0.7952 | SingleView_PatchBased_EfficientNet-B3 |
| Binary Classification | CBIS-DDSM | AUC | 0.75 | VGG/ResNet |
| Binary Classification | CBIS-DDSM | AUC | 0.8483 | EfficientNet-B0 w/ TTA |
| Binary Classification | CBIS-DDSM | AUC | 0.8418 | EfficientNet-B0 |
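Every value above is an AUC: the probability that a randomly chosen positive (cancer) case receives a higher score than a randomly chosen negative one. A minimal sketch of that rank-based interpretation, using made-up scores rather than the paper's predictions:

```python
# Sketch (assumed data): rank-based AUC, the probability that a positive
# example outscores a negative one, with ties counted as half a win.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores and labels, purely for illustration.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.3]
labels = [0,   0,   1,    1,   1,    0,   1,   0]
value = auc(scores, labels)  # 0.9375 for this toy data
```

This pairwise formulation is equivalent to the area under the ROC curve, which is why AUC is threshold-free and well suited to comparing classifiers like those in the table.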

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
Calibrated and Robust Foundation Models for Vision-Language and Medical Image Tasks Under Distribution Shift (2025-07-12)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
RadiomicsRetrieval: A Customizable Framework for Medical Image Retrieval Using Radiomics Features (2025-07-11)
Contrastive and Transfer Learning for Effective Audio Fingerprinting through a Real-World Evaluation Protocol (2025-07-08)