The impact of training dataset size and ensemble inference strategies on
head and neck auto-segmentation
- URL: http://arxiv.org/abs/2303.17318v1
- Date: Thu, 30 Mar 2023 12:14:07 GMT
- Title: The impact of training dataset size and ensemble inference strategies on
head and neck auto-segmentation
- Authors: Edward G. A. Henderson, Marcel van Herk, Eliana M. Vasquez Osorio
- Abstract summary: Convolutional neural networks (CNNs) are increasingly being used to automate segmentation of organs-at-risk in radiotherapy.
We investigated how much data is required to train accurate and robust head and neck auto-segmentation models.
An established 3D CNN was trained from scratch with different sized datasets (25-1000 scans) to segment the brainstem, parotid glands and spinal cord in CTs.
We evaluated multiple ensemble techniques to improve the performance of these models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) are increasingly being used to automate
segmentation of organs-at-risk in radiotherapy. Since large sets of highly
curated data are scarce, we investigated how much data is required to train
accurate and robust head and neck auto-segmentation models. For this, an
established 3D CNN was trained from scratch with different sized datasets
(25-1000 scans) to segment the brainstem, parotid glands and spinal cord in
CTs. Additionally, we evaluated multiple ensemble techniques to improve the
performance of these models. The segmentations improved with training set size
up to 250 scans and the ensemble methods significantly improved performance for
all organs. The impact of the ensemble methods was most notable in the smallest
datasets, demonstrating their potential for use in cases where large training
datasets are difficult to obtain.
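The abstract does not spell out which ensemble inference strategies were compared, so the following is only a minimal sketch of two common ways to combine the outputs of several independently trained segmentation models: averaging their per-voxel class probabilities, and majority voting on their hard label maps. All array shapes, function names, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def probability_averaging(prob_maps: np.ndarray) -> np.ndarray:
    """Average per-voxel class probabilities over models, then take the argmax.

    prob_maps: shape (n_models, n_classes, D, H, W), each model's softmax
    output for one CT volume (illustrative layout). Returns a (D, H, W) label map.
    """
    mean_probs = prob_maps.mean(axis=0)      # (n_classes, D, H, W)
    return mean_probs.argmax(axis=0)         # (D, H, W)

def majority_voting(prob_maps: np.ndarray) -> np.ndarray:
    """Let each model cast a hard per-voxel vote and keep the most common label."""
    votes = prob_maps.argmax(axis=1)         # (n_models, D, H, W)
    n_classes = prob_maps.shape[1]
    # Count the votes for each class and pick the label with the most votes.
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)             # (D, H, W)

if __name__ == "__main__":
    # Toy demo: random "softmax" outputs from 5 models and 4 classes
    # (background, brainstem, parotid glands, spinal cord) on a tiny volume.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 4, 8, 16, 16))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(probability_averaging(probs).shape)  # (8, 16, 16)
    print(majority_voting(probs).shape)        # (8, 16, 16)
```

In practice, the models in such an ensemble typically differ by cross-validation fold, random seed, or data augmentation; probability averaging tends to give smoother organ boundaries than hard voting, but either variant illustrates the basic idea of ensemble inference.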
Related papers
- Enhanced segmentation of femoral bone metastasis in CT scans of patients using synthetic data generation with 3D diffusion models [0.06700983301090582]
We propose an automated data pipeline using 3D Denoising Diffusion Probabilistic Models (DDPM) to generalize to new images.
We created 5675 new volumes, then trained 3D U-Net segmentation models on real and synthetic data to compare segmentation performance.
arXiv Detail & Related papers (2024-09-17T09:21:19Z)
- Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable Dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
arXiv Detail & Related papers (2024-01-11T19:07:58Z)
- Transfer learning from a sparsely annotated dataset of 3D medical images [4.477071833136902]
This study explores the use of transfer learning to improve the performance of deep convolutional neural networks for organ segmentation in medical imaging.
A base segmentation model was trained on a large and sparsely annotated dataset; its weights were used for transfer learning on four new downstream segmentation tasks.
The results showed that transfer learning from the base model was beneficial when small datasets were available.
arXiv Detail & Related papers (2023-11-08T21:31:02Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight-based hybrid medical image segmentation approach.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Quality or Quantity: Toward a Unified Approach for Multi-organ Segmentation in Body CT [3.188202211222004]
Organ segmentation of medical images is a key step in virtual imaging trials.
In this study, we explored the tradeoffs between quality and quantity.
arXiv Detail & Related papers (2022-03-03T00:48:54Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification performance than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- 3D Segmentation Networks for Excessive Numbers of Classes: Distinct Bone Segmentation in Upper Bodies [1.2023648183416153]
This paper discusses the intricacies of training a 3D segmentation network in a many-label setting.
We show necessary modifications in network architecture, loss function, and data augmentation.
As a result, we demonstrate the robustness of our method by automatically segmenting over one hundred distinct bones simultaneously, in an end-to-end learnt fashion, from a CT scan.
arXiv Detail & Related papers (2020-10-14T12:54:15Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.