Scaling nnU-Net for CBCT Segmentation
- URL: http://arxiv.org/abs/2411.17213v1
- Date: Tue, 26 Nov 2024 08:29:24 GMT
- Title: Scaling nnU-Net for CBCT Segmentation
- Authors: Fabian Isensee, Yannick Kirchhoff, Lars Kraemer, Maximilian Rokuss, Constantin Ulrich, Klaus H. Maier-Hein
- Abstract summary: This paper presents our approach to scaling the nnU-Net framework for multi-structure segmentation on Cone Beam Computed Tomography (CBCT) images.
We leveraged the nnU-Net ResEnc L model, introducing key modifications to patch size, network topology, and data augmentation strategies to address the challenges of dental CBCT imaging.
Our method achieved a mean Dice coefficient of 0.9253 and HD95 of 18.472 on the test set, securing a mean rank of 4.6 and with it the first place in the ToothFairy2 challenge.
- Score: 0.9854844969061186
- Abstract: This paper presents our approach to scaling the nnU-Net framework for multi-structure segmentation on Cone Beam Computed Tomography (CBCT) images, specifically in the scope of the ToothFairy2 Challenge. We leveraged the nnU-Net ResEnc L model, introducing key modifications to patch size, network topology, and data augmentation strategies to address the unique challenges of dental CBCT imaging. Our method achieved a mean Dice coefficient of 0.9253 and HD95 of 18.472 on the test set, securing a mean rank of 4.6 and with it the first place in the ToothFairy2 challenge. The source code is publicly available, encouraging further research and development in the field.
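The modifications named above (patch size, network topology, data augmentation) would, in nnU-Net v2, typically be applied by editing the generated experiment plans rather than the model code. The following is a minimal, hypothetical sketch of that workflow, assuming the usual plans.json layout with a configurations -> 3d_fullres -> patch_size entry; the file path, dataset identifier, and patch size are placeholders, not the values reported in the paper.

```python
# Hedged sketch: enlarge the patch size in an nnU-Net v2 plans file and save it
# under a new name so it can be selected at training time. All paths, the dataset
# identifier, and the patch size below are placeholders, not the authors' values.
import json
from pathlib import Path

plans_path = Path("nnUNet_preprocessed/Dataset112_ToothFairy2/nnUNetResEncUNetLPlans.json")
plans = json.loads(plans_path.read_text())

cfg = plans["configurations"]["3d_fullres"]  # assumed plans.json layout
print("original patch size:", cfg["patch_size"])

# Placeholder patch size: a larger crop covers more of the dental arch at once.
cfg["patch_size"] = [160, 320, 320]

plans["plans_name"] = "nnUNetResEncUNetLPlans_largePatch"
out_path = plans_path.with_name(plans["plans_name"] + ".json")
out_path.write_text(json.dumps(plans, indent=2))
print("wrote", out_path)
```

Training would then point nnU-Net at the edited plans via its `-p` option (something like `nnUNetv2_train <dataset_id> 3d_fullres 0 -p nnUNetResEncUNetLPlans_largePatch`, assuming the standard nnU-Net v2 CLI); the publicly released source code contains the authors' actual settings.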
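For reference, the two reported metrics can be computed with a generic implementation along the following lines. This is a minimal sketch using NumPy and SciPy, not the ToothFairy2 evaluation code; it assumes binary masks per structure and a known voxel spacing.

```python
# Hedged sketch: Dice coefficient and 95th-percentile Hausdorff distance (HD95)
# for a pair of binary 3D masks. Generic reference implementation, not the
# official challenge evaluation.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: one common convention is to score 1.0
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface distances, in the units of `spacing`."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels = mask minus its binary erosion.
    pred_surf = pred & ~ndimage.binary_erosion(pred)
    gt_surf = gt & ~ndimage.binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    distances = np.hstack([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(np.percentile(distances, 95))
```

Averaging such per-structure scores over cases and structures yields numbers comparable in form to the reported mean Dice of 0.9253 and HD95 of 18.472, though the exact aggregation rules are defined by the challenge.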
Related papers
- FUSegNet: A Deep Convolutional Neural Network for Foot Ulcer Segmentation [3.880691536038042]
FUSegNet is a new model for foot ulcer segmentation in diabetes patients.
It uses the pre-trained EfficientNet-b7 as a backbone to address the issue of limited training samples.
arXiv Detail & Related papers (2023-05-04T16:07:22Z)
- Extending nnU-Net is all you need [2.1729722043371016]
We use nnU-Net to participate in the AMOS2022 challenge, which comes with a unique set of tasks.
The dataset is one of the largest ever created and boasts 15 target structures.
Our final ensemble achieves Dice scores of 90.13 for Task 1 (CT) and 89.06 for Task 2 (CT+MRI) in a 5-fold cross-validation.
arXiv Detail & Related papers (2022-08-23T07:54:29Z)
- Highly Accurate Dichotomous Image Segmentation [139.79513044546]
A new task called dichotomous image segmentation (DIS) aims to segment highly accurate objects from natural images.
We collect the first large-scale dataset, DIS5K, which contains 5,470 high-resolution (e.g., 2K, 4K or larger) images.
We also introduce a simple intermediate supervision baseline (IS-Net) using both feature-level and mask-level guidance for DIS model training.
arXiv Detail & Related papers (2022-03-06T20:09:19Z)
- TA-Net: Topology-Aware Network for Gland Segmentation [71.52681611057271]
We propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands.
TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation.
It achieves state-of-the-art performance on the two datasets.
arXiv Detail & Related papers (2021-10-27T17:10:58Z)
- Dense Gaussian Processes for Few-Shot Segmentation [66.08463078545306]
We propose a few-shot segmentation method based on dense Gaussian process (GP) regression.
We exploit the end-to-end learning capabilities of our approach to learn a high-dimensional output space for the GP.
Our approach sets a new state-of-the-art for both 1-shot and 5-shot FSS on the PASCAL-5$^i$ and COCO-20$^i$ benchmarks.
arXiv Detail & Related papers (2021-10-07T17:57:54Z)
- Using Out-of-the-Box Frameworks for Unpaired Image Translation and Image Segmentation for the crossMoDA Challenge [0.6396288020763143]
We use the CUT model for domain adaptation from contrast-enhanced T1 MR to high-resolution T2 MR.
For the segmentation task, we use the nnU-Net framework.
arXiv Detail & Related papers (2021-10-02T08:04:46Z)
- CFPNet-M: A Light-Weight Encoder-Decoder Based Network for Multimodal Biomedical Image Real-Time Segmentation [0.0]
We developed a novel lightweight architecture, the Channel-wise Feature Pyramid Network for Medicine (CFPNet-M).
It achieves comparable segmentation results on all five medical datasets with only 0.65 million parameters (about 2% of U-Net) and 8.8 MB of memory.
arXiv Detail & Related papers (2021-05-10T02:29:11Z)
- SAR-U-Net: squeeze-and-excitation block and atrous spatial pyramid pooling based residual U-Net for automatic liver CT segmentation [3.192503074844775]
A modified U-Net based framework is presented, which leverages techniques from Squeeze-and-Excitation (SE) block, Atrous Spatial Pyramid Pooling (ASPP) and residual learning.
The effectiveness of the proposed method was tested on two public datasets LiTS17 and SLiver07.
arXiv Detail & Related papers (2021-03-11T02:32:59Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.