Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis

Zongwei Zhou, Vatsal Sodha, Md Mahfuzur Rahman Siddiquee, Ruibin Feng, Nima Tajbakhsh, Michael B. Gotway, Jianming Liang

2019-08-19 · Anatomy · Lung Nodule Segmentation · Self-Supervised Learning · Transfer Learning · Medical Image Segmentation · Medical Image Analysis · Liver Segmentation · Brain Tumor Segmentation · Lung Nodule Detection

Paper · PDF · Code · Code (official)

Abstract

Transfer learning from natural images to medical images has become one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch in all five target 3D applications, covering both segmentation and classification. More importantly, while simply learning a model from scratch in 3D may not necessarily yield better performance than transfer learning from ImageNet in 2D, our Models Genesis consistently top all 2D approaches, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of our Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated yet recurrent anatomy in medical images can serve as strong supervision signals for deep models to learn common anatomical representations automatically via self-supervision. As open science, all pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
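The framework described in the abstract is an image-restoration pretext task: a 3D sub-volume is deformed, and an encoder–decoder is trained to recover the original, so the recurrent anatomy itself supplies the supervision. The sketch below illustrates the idea with one such deformation (local pixel shuffling) in plain NumPy; the block size, block count, volume shape, and the identity stand-in for the encoder–decoder are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def local_pixel_shuffle(volume, block=4, n_blocks=20, seed=None):
    """Deform a 3D volume by shuffling voxels inside small random blocks.

    One of several deformations used in restoration-style self-supervision;
    the sizes here are illustrative, not the paper's configuration.
    """
    rng = np.random.default_rng(seed)
    out = volume.copy()
    d, h, w = volume.shape
    for _ in range(n_blocks):
        z = rng.integers(0, d - block)
        y = rng.integers(0, h - block)
        x = rng.integers(0, w - block)
        patch = out[z:z + block, y:y + block, x:x + block].ravel()
        rng.shuffle(patch)  # scramble voxels within this block only
        out[z:z + block, y:y + block, x:x + block] = patch.reshape(
            block, block, block)
    return out

def restoration_loss(restored, original):
    """MSE between the network's reconstruction and the original sub-volume."""
    return float(np.mean((restored - original) ** 2))

# Toy usage: deform a random sub-volume, then score a trivial "model".
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32)).astype(np.float32)
deformed = local_pixel_shuffle(vol, seed=0)
# A real encoder-decoder would map `deformed` back toward `vol`; the
# identity stand-in here only shows how the restoration loss is computed.
loss = restoration_loss(deformed, vol)
```

During pre-training, minimizing this loss forces the network to learn what intact anatomy looks like; the learned encoder weights are then fine-tuned on the downstream segmentation or classification task.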

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Medical Image Segmentation | Medical Segmentation Decathlon | Dice (Average) | 76.97 | Models Genesis |
| Medical Image Segmentation | Medical Segmentation Decathlon | NSD | 87.19 | Models Genesis |
| Medical Image Segmentation | BRATS-2013 | Dice Score | 0.9258 | ModelGenesis |
| Medical Image Segmentation | LiTS2017 | Dice | 91.13 | ModelGenesis |
| Medical Image Segmentation | LiTS2017 | IoU | 79.52 | ModelGenesis |
| Medical Image Segmentation | LIDC-IDRI | Dice | 75.86 | ModelGenesis |
| Medical Image Segmentation | LIDC-IDRI | IoU | 77.62 | ModelGenesis |
| Lung Nodule Detection | LUNA2016 FPRED | AUC | 98.2 | ModelGenesis |
| Pulmonary Embolism Detection | PE-CAD FPRED | AUC | 88.04 | ModelGenesis |

Related Papers

- RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
- Are Vision Foundation Models Ready for Out-of-the-Box Medical Image Registration? (2025-07-15)