Multi-modal segmentation of 3D brain scans using neural networks
- URL: http://arxiv.org/abs/2008.04594v1
- Date: Tue, 11 Aug 2020 09:13:54 GMT
- Title: Multi-modal segmentation of 3D brain scans using neural networks
- Authors: Jonathan Zopes, Moritz Platscher, Silvio Paganucci, Christian Federau
- Abstract summary: Deep convolutional neural networks are trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans.
Segmentation quality is quantified using the Dice metric for a total of 27 anatomical structures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: To implement a brain segmentation pipeline based on convolutional
neural networks, which rapidly segments 3D volumes into 27 anatomical
structures. To provide an extensive, comparative study of segmentation
performance on various contrasts of magnetic resonance imaging (MRI) and
computed tomography (CT) scans. Methods: Deep convolutional neural networks are
trained to segment 3D MRI (MPRAGE, DWI, FLAIR) and CT scans. A large database
of in total 851 MRI/CT scans is used for neural network training. Training
labels are obtained on the MPRAGE contrast and coregistered to the other
imaging modalities. The segmentation quality is quantified using the Dice
metric for a total of 27 anatomical structures. Dropout sampling is implemented
to identify corrupted input scans or low-quality segmentations. Full
segmentation of 3D volumes with more than 2 million voxels is obtained in less
than 1 s of processing time on a graphics processing unit. Results: The best
average Dice score is found on $T_1$-weighted MPRAGE ($85.3\pm4.6\,\%$).
However, for FLAIR ($80.0\pm7.1\,\%$), DWI ($78.2\pm7.9\,\%$) and CT ($79.1\pm
7.9\,\%$), good-quality segmentation is feasible for most anatomical
structures. Corrupted input volumes or low-quality segmentations can be
detected using dropout sampling. Conclusion: The flexibility and performance of
deep convolutional neural networks enables the direct, real-time segmentation
of FLAIR, DWI and CT scans without requiring $T_1$-weighted scans.
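The two evaluation ideas in the abstract, per-structure Dice scoring and dropout sampling for quality control, can be sketched compactly. The snippet below is a minimal PyTorch illustration, not the authors' released pipeline; the network `model`, the class count (27 structures plus background), and the choice of predictive entropy as the uncertainty measure are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 28  # assumed: 27 anatomical structures + background

def dice_per_structure(pred_labels, true_labels, num_classes=NUM_CLASSES):
    """Dice = 2|A ∩ B| / (|A| + |B|), computed per anatomical structure."""
    scores = []
    for c in range(1, num_classes):           # skip background (class 0)
        pred_c = (pred_labels == c)
        true_c = (true_labels == c)
        denom = pred_c.sum() + true_c.sum()
        if denom == 0:                         # structure absent in both masks
            scores.append(torch.tensor(float('nan')))
            continue
        inter = (pred_c & true_c).sum()
        scores.append(2.0 * inter / denom)
    return torch.stack(scores)

@torch.no_grad()
def mc_dropout_segment(model, volume, n_samples=10):
    """Monte Carlo dropout: keep dropout active at inference, average several
    stochastic forward passes, and use their spread to flag unreliable inputs."""
    model.train()                              # keeps dropout layers active
    probs = torch.stack([
        F.softmax(model(volume), dim=1) for _ in range(n_samples)
    ])                                         # (n_samples, B, C, D, H, W)
    mean_probs = probs.mean(dim=0)
    segmentation = mean_probs.argmax(dim=1)
    # Per-voxel predictive entropy as an uncertainty map (one common choice).
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=1)
    return segmentation, entropy
```

A high mean entropy, or low agreement across the sampled segmentations, can then be thresholded to flag corrupted scans or low-quality segmentations, in the spirit of the dropout-sampling check described in the abstract; the threshold itself would need to be calibrated on validation data.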
Related papers
- Multimodal 3D Brain Tumor Segmentation with Adversarial Training and Conditional Random Field [44.027635932094064]
We propose a multimodal 3D Volume Generative Adversarial Network (3D-vGAN) for precise segmentation.
The model uses Pseudo-3D convolutions to improve the V-net, adds a conditional random field after the generator, and uses the original image as supplemental guidance.
Results on the BraTS-2018 dataset show that 3D-vGAN outperforms classical segmentation models, including U-Net, GAN, FCN and 3D V-net, reaching a specificity over 99.8%.
arXiv Detail & Related papers (2024-11-21T18:52:02Z)
- Acute ischemic stroke lesion segmentation in non-contrast CT images using 3D convolutional neural networks [0.0]
We propose an automatic algorithm for volumetric segmentation of acute ischemic stroke lesions in non-contrast computed tomography 3D brain images.
Our deep-learning approach is based on the popular 3D U-Net convolutional neural network architecture.
arXiv Detail & Related papers (2023-01-17T10:39:39Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely on 2D CT images, which lack a 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations and therefore do not capture multiple receptive-field sizes within the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z)
- Med-DANet: Dynamic Architecture Network for Efficient Medical Volumetric Segmentation [13.158995287578316]
We propose a dynamic architecture network named Med-DANet to achieve an effective trade-off between accuracy and efficiency.
For each slice of the input 3D MRI volume, the proposed method learns a slice-specific decision via a Decision Network.
Our proposed method achieves comparable or better results than previous state-of-the-art methods for 3D MRI brain tumor segmentation.
arXiv Detail & Related papers (2022-06-14T03:25:58Z)
- Deep Learning Framework for Real-time Fetal Brain Segmentation in MRI [15.530500862944818]
We analyze the speed-accuracy performance of a variety of deep neural network models.
We devised a symbolically small convolutional neural network that combines spatial details at high resolution with context features extracted at lower resolutions.
We trained our model as well as eight alternative, state-of-the-art networks with manually-labeled fetal brain MRI slices.
arXiv Detail & Related papers (2022-05-02T20:43:14Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Hierarchical 3D Feature Learning for Pancreas Segmentation [11.588903060674344]
We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans.
Our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%.
Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding.
arXiv Detail & Related papers (2021-09-03T09:27:07Z)
- A self-supervised learning strategy for postoperative brain cavity segmentation simulating resections [46.414990784180546]
Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique.
CNNs require large annotated datasets for training.
Self-supervised learning strategies can leverage unlabeled data for training.
arXiv Detail & Related papers (2021-05-24T12:27:06Z)
- MixNet: Multi-modality Mix Network for Brain Segmentation [8.44876865136712]
MixNet is a 2D semantic-wise deep convolutional neural network for segmenting brain structures in MRI images.
MixNetv2 was submitted to the MRBrainS challenge at MICCAI 2018 and won the 3rd place in the 3-label task.
arXiv Detail & Related papers (2020-04-21T08:55:55Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.