DR-Unet104 for Multimodal MRI brain tumor segmentation
- URL: http://arxiv.org/abs/2011.02840v2
- Date: Tue, 4 May 2021 14:25:49 GMT
- Title: DR-Unet104 for Multimodal MRI brain tumor segmentation
- Authors: Jordan Colman, Lei Zhang, Wenting Duan and Xujiong Ye
- Abstract summary: We propose a 2D deep residual Unet with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs.
We make multiple additions to the Unet architecture, including adding the 'bottleneck' residual block to the Unet encoder and adding dropout after each convolution block stack.
We produced a competitive lesion segmentation architecture despite using only 2D convolutions, with the added benefit that it can be run on lower-power computers.
- Score: 7.786297008452384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a 2D deep residual Unet with 104 convolutional
layers (DR-Unet104) for lesion segmentation in brain MRIs. We make multiple
additions to the Unet architecture, including adding the 'bottleneck' residual
block to the Unet encoder and adding dropout after each convolution block
stack. We verified the effect of introducing dropout regularisation with a small
rate (e.g. 0.2) on the architecture, and found that a dropout rate of 0.2 improved
the overall performance compared to no dropout or a dropout of 0.5. We
evaluated the proposed architecture as part of the Multimodal Brain Tumor
Segmentation (BraTS) 2020 Challenge and compared our method to DeepLabV3+ with
a ResNet-V2-152 backbone. We found that the DR-Unet104 achieved mean Dice
similarity coefficients (DSC) on the validation data of 0.8862, 0.6756 and 0.6721
for whole tumor, enhancing tumor and tumor core respectively, an overall
improvement over the 0.8770, 0.65242 and 0.68134 achieved by DeepLabV3+. Our
method produced a final mean DSC of 0.8673, 0.7514 and 0.7983 on whole tumor,
enhancing tumor and tumor core on the challenge's testing data. We produced a
competitive lesion segmentation architecture despite using only 2D convolutions,
with the added benefit that it can be run on lower-power computers than a 3D
architecture. The source code and trained model for this work are openly
available at https://github.com/jordan-colman/DR-Unet104.
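To make the architectural additions described above concrete, here is a minimal PyTorch-style sketch of a 'bottleneck' residual block and an encoder stage with dropout (rate 0.2) applied after the block stack. This is an illustrative sketch only: the layer ordering, channel widths and the choice of PyTorch are assumptions, not the authors' released implementation (see the GitHub repository above for that).

```python
# Hypothetical sketch of a 2D bottleneck residual block with dropout, in the
# spirit of the DR-Unet104 encoder; NOT the authors' released code.
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 -> 3x3 -> 1x1 'bottleneck' residual block (pre-activation style assumed)."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
        )
        # Project the skip connection when the channel count changes.
        self.skip = (nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        return self.body(x) + self.skip(x)

class EncoderStage(nn.Module):
    """A stack of bottleneck blocks followed by dropout, as the abstract describes."""
    def __init__(self, in_ch, out_ch, num_blocks, drop_rate=0.2):
        super().__init__()
        blocks = [BottleneckBlock(in_ch, out_ch // 4, out_ch)]
        blocks += [BottleneckBlock(out_ch, out_ch // 4, out_ch) for _ in range(num_blocks - 1)]
        self.blocks = nn.Sequential(*blocks)
        self.dropout = nn.Dropout2d(drop_rate)  # dropout after each block stack

    def forward(self, x):
        return self.dropout(self.blocks(x))

# Example: one encoder stage on a 4-channel multimodal MRI slice (T1, T1ce, T2, FLAIR).
stage = EncoderStage(in_ch=4, out_ch=64, num_blocks=3, drop_rate=0.2)
out = stage(torch.randn(1, 4, 240, 240))  # BraTS slices are 240x240
print(out.shape)                          # torch.Size([1, 64, 240, 240])
```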
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Hybrid Window Attention Based Transformer Architecture for Brain Tumor Segmentation [28.650980942429726]
We propose a volumetric vision transformer that follows two windowing strategies in attention for extracting fine features.
We trained and evaluated network architecture on the FeTS Challenge 2022 dataset.
Our performance on the online validation dataset is as follows: Dice Similarity Score of 81.71%, 91.38% and 85.40%.
arXiv Detail & Related papers (2022-09-16T03:55:48Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- HNF-Netv2 for Brain Tumor Segmentation using multi-modal MR Imaging [86.52489226518955]
We extend our HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
Our method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
arXiv Detail & Related papers (2022-02-10T06:34:32Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- Brain Tumor Segmentation and Survival Prediction using Automatic Hard mining in 3D CNN Architecture [0.30098583327398537]
We utilize 3-D fully convolutional neural networks (CNNs) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI).
The architecture uses dense connectivity patterns to reduce the number of weights, together with residual connections, and reaches 0.448 with weights obtained from training this model on the BraTS 2018 dataset.
Hard mining is done during training to train for the difficult cases of segmentation tasks by increasing the dice similarity coefficient (DSC) threshold to choose the hard cases as epoch increases.
arXiv Detail & Related papers (2021-01-05T14:34:16Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that our proposed approach achieves dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z)
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) distances of 20.4, 6.7 and 19.5 mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
- Brain tumour segmentation using cascaded 3D densely-connected U-net [10.667165962654996]
We propose a deep-learning based method to segment a brain tumour into its subregions.
The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture.
Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice Scores of 0.90, 0.82, and 0.78 for whole tumour, tumour core and enhancing tumour respectively.
arXiv Detail & Related papers (2020-09-16T09:14:59Z)
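Most results quoted on this page, for DR-Unet104 and the related papers above, are Dice similarity coefficients over the standard BraTS sub-regions (whole tumor, tumor core, enhancing tumor). For reference, a minimal NumPy sketch of how such per-region Dice scores are typically computed from multi-class label maps is shown below; the BraTS label convention used here (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor) is an assumption of this sketch rather than something stated on this page.

```python
# Minimal sketch (assumed, not taken from any of the papers above) of
# BraTS-style per-region Dice computation from multi-class label maps.
import numpy as np

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return float((2.0 * intersection + eps) / (pred_mask.sum() + gt_mask.sum() + eps))

def brats_region_dice(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Per-region Dice for whole tumor (WT), tumor core (TC) and enhancing tumor (ET)."""
    regions = {
        "WT": (1, 2, 4),  # whole tumor: all tumor labels
        "TC": (1, 4),     # tumor core: necrosis + enhancing
        "ET": (4,),       # enhancing tumor only
    }
    return {name: dice(np.isin(pred, labels), np.isin(gt, labels))
            for name, labels in regions.items()}

# Toy usage on random label volumes (real inputs would be 240x240x155 BraTS volumes).
rng = np.random.default_rng(0)
pred = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
gt = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
print(brats_region_dice(pred, gt))
```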