3D Segmentation Networks for Excessive Numbers of Classes: Distinct Bone
Segmentation in Upper Bodies
- URL: http://arxiv.org/abs/2010.07045v1
- Date: Wed, 14 Oct 2020 12:54:15 GMT
- Title: 3D Segmentation Networks for Excessive Numbers of Classes: Distinct Bone
Segmentation in Upper Bodies
- Authors: Eva Schnider, Antal Horváth, Georg Rauter, Azhar Zam, Magdalena Müller-Gerbl, Philippe C. Cattin
- Abstract summary: This paper discusses the intricacies of training a 3D segmentation network in a many-label setting.
We show necessary modifications in network architecture, loss function, and data augmentation.
As a result, we demonstrate the robustness of our method by automatically segmenting over one hundred distinct bones simultaneously in an end-to-end learnt fashion from a CT scan.
- Score: 1.2023648183416153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of distinct bones plays a crucial role in diagnosis, planning,
navigation, and the assessment of bone metastasis. It supplies semantic
knowledge to visualisation tools for the planning of surgical interventions and
the education of health professionals. Fully supervised segmentation of 3D data
using Deep Learning methods has been extensively studied for many tasks but is
usually restricted to distinguishing only a handful of classes. With 125
distinct bones, our case includes many more labels than typical 3D segmentation
tasks. For this reason, the direct adaptation of most established methods is
not possible. This paper discusses the intricacies of training a 3D
segmentation network in a many-label setting and shows necessary modifications
in network architecture, loss function, and data augmentation. As a result, we
demonstrate the robustness of our method by automatically segmenting over one
hundred distinct bones simultaneously in an end-to-end learnt fashion from a
CT scan.
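
The abstract names the loss function as one of the components that must be adapted for the many-label setting, but does not spell out the formulation on this page. As a rough, non-authoritative sketch of the kind of objective commonly used for such problems, the PyTorch snippet below implements a multi-class soft Dice loss averaged over all label channels; the class count of 126 (125 bones plus background) and all tensor shapes are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 126  # illustrative assumption: 125 distinct bones + 1 background channel


def soft_dice_loss(logits, target, eps=1e-6):
    """Multi-class soft Dice loss averaged over all label channels.

    logits: (B, C, D, H, W) raw network outputs
    target: (B, D, H, W) integer labels in [0, C)
    """
    probs = torch.softmax(logits, dim=1)
    # One-hot encode the labels and move the class axis next to the batch axis.
    one_hot = F.one_hot(target, num_classes=logits.shape[1])  # (B, D, H, W, C)
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()          # (B, C, D, H, W)
    dims = (0, 2, 3, 4)  # reduce over batch and spatial axes, keep the class axis
    intersection = torch.sum(probs * one_hot, dims)
    cardinality = torch.sum(probs + one_hot, dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()


# Toy usage with a small random patch (real CT patches are far larger).
logits = torch.randn(1, NUM_CLASSES, 32, 32, 32)
labels = torch.randint(0, NUM_CLASSES, (1, 32, 32, 32))
print(soft_dice_loss(logits, labels))
```

With over a hundred output channels, the one-hot encoding and the softmax become memory-heavy for full-resolution CT volumes, which is one reason patch-based training is common in this kind of setting.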
Related papers
- Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation [25.74088298769155]
We propose a universal training framework called MedContext for 3D medical segmentation.
Our approach effectively learns self-supervised contextual cues jointly with the supervised voxel segmentation task.
The effectiveness of MedContext is validated across multiple 3D medical datasets and four state-of-the-art model architectures.
arXiv Detail & Related papers (2024-02-27T17:58:05Z)
- The impact of training dataset size and ensemble inference strategies on head and neck auto-segmentation [0.0]
Convolutional neural networks (CNNs) are increasingly being used to automate segmentation of organs-at-risk in radiotherapy.
We investigated how much data is required to train accurate and robust head and neck auto-segmentation models.
An established 3D CNN was trained from scratch with datasets of different sizes (25-1000 scans) to segment the brainstem, parotid glands and spinal cord in CTs.
We evaluated multiple ensemble techniques to improve the performance of these models.
arXiv Detail & Related papers (2023-03-30T12:14:07Z)
- Continual Segment: Towards a Single, Unified and Accessible Continual Segmentation Model of 143 Whole-body Organs in CT Scans [31.388497540849297]
We propose a new architectural CSS learning framework to learn a single deep segmentation model for segmenting a total of 143 whole-body organs.
Trained and validated on 3D CT scans of 2,500+ patients from four datasets, our single network can segment all 143 whole-body organs with very high accuracy.
arXiv Detail & Related papers (2023-02-01T00:49:21Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate that the proposed network achieves state-of-the-art accuracy on small-organ segmentation, outperforming the previous best method.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results on the one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z)
- DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets [102.55303521877933]
We propose a dynamic on-demand network (DoDNet) that learns to segment multiple organs and tumors on partially labelled datasets.
DoDNet consists of a shared encoder-decoder architecture, a task encoding module, a controller for generating dynamic convolution filters, and a single but dynamic segmentation head (a rough sketch of this weight-generation idea appears after the related-papers list).
arXiv Detail & Related papers (2020-11-20T04:56:39Z)
- SAR: Scale-Aware Restoration Learning for 3D Tumor Segmentation [23.384259038420005]
We propose Scale-Aware Restoration (SAR) for 3D tumor segmentation.
A novel proxy task, i.e. scale discrimination, is formulated to pre-train the 3D neural network combined with the self-restoration task.
We demonstrate the effectiveness of our method on two downstream tasks: (i) brain tumor segmentation and (ii) pancreas tumor segmentation.
arXiv Detail & Related papers (2020-10-13T01:23:17Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
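
One recurring idea in the related work above is the dynamic segmentation head used by DoDNet and TransDoDNet, where a controller turns a task encoding into convolution filters that are applied to shared decoder features. The snippet below is a hypothetical, simplified illustration of that mechanism in PyTorch, not the published implementation; the feature width, task count, and single 1x1x1 kernel are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicSegHead(nn.Module):
    """Illustrative dynamic head: a controller maps a task encoding to the
    weights and bias of a 1x1x1 convolution applied to shared 3D features."""

    def __init__(self, feat_channels=32, num_tasks=7, out_channels=2):
        super().__init__()
        self.feat_channels = feat_channels
        self.out_channels = out_channels
        # Controller: task one-hot -> flattened conv weights and biases.
        n_params = out_channels * feat_channels + out_channels
        self.controller = nn.Linear(num_tasks, n_params)

    def forward(self, features, task_onehot):
        # features: (B, C, D, H, W); task_onehot: (B, num_tasks)
        batch = features.shape[0]
        params = self.controller(task_onehot)
        split = self.out_channels * self.feat_channels
        weights, biases = params[:, :split], params[:, split:]
        outputs = []
        for i in range(batch):  # per-sample filters, applied one volume at a time
            w_i = weights[i].view(self.out_channels, self.feat_channels, 1, 1, 1)
            outputs.append(F.conv3d(features[i : i + 1], w_i, biases[i]))
        return torch.cat(outputs, dim=0)


# Usage: shared encoder-decoder features for one CT patch, task index 3 of 7.
features = torch.randn(1, 32, 16, 16, 16)
task = F.one_hot(torch.tensor([3]), num_classes=7).float()
logits = DynamicSegHead()(features, task)  # shape: (1, 2, 16, 16, 16)
```

Published dynamic-head designs are typically richer (for example, conditioning on pooled image features as well and generating several stacked kernels); the single linear controller here only conveys the basic weight-generation idea.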