Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger
This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
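The paper's own implementation is Caffe-based; as a hedged illustration of the core architectural idea, replacing every 2D operation of the u-net with its 3D counterpart, here is a minimal PyTorch sketch. It is not the authors' code: the depth (two resolution levels instead of the paper's four), the channel counts, and the use of padded convolutions are simplifying assumptions made for brevity.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions, each followed by batch norm and ReLU,
    # the 3D counterpart of the u-net's 2D conv-conv blocks.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_channels=1, num_classes=2, base_ch=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.bottleneck = conv_block(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool3d(2)  # 3D max pooling replaces 2D pooling
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, 2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch, 2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)
        self.head = nn.Conv3d(base_ch, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                 # full resolution
        e2 = self.enc2(self.pool(e1))     # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        # Decoder: upsample, then concatenate the encoder skip connection.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # per-voxel class logits

# A 64x64x64 single-channel volume maps to per-voxel logits:
# UNet3D()(torch.randn(1, 1, 64, 64, 64)).shape == (1, 2, 64, 64, 64)
```

For the on-the-fly elastic deformations, the paper samples random displacement vectors on a coarse grid and interpolates with B-splines; a common stand-in, sketched below under that caveat, is a Gaussian-smoothed random displacement field. Here `sigma` and `alpha` are illustrative values, and the same field should be applied to the image (linear interpolation) and to the label volume (nearest-neighbor interpolation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(volume, sigma=4.0, alpha=20.0, order=1, rng=None):
    # Displace every voxel by a smooth random field and resample.
    rng = np.random.default_rng() if rng is None else rng
    coords = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    displaced = [
        c + alpha * gaussian_filter(rng.uniform(-1, 1, volume.shape), sigma)
        for c in coords
    ]
    # order=1 (linear) for images, order=0 (nearest) for label maps.
    return map_coordinates(volume, displaced, order=order, mode="reflect")
```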
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | ShapeNet-Part | Instance Average IoU | 84.6 | 3D U-Net (Çiçek et al., 2016) |
| 3D Instance Segmentation | ScanNet(v2) | mAP@50 | 31.9 | UNet-Backbone |