Brain Tumor Segmentation and Survival Prediction using Automatic Hard
mining in 3D CNN Architecture
- URL: http://arxiv.org/abs/2101.01546v1
- Date: Tue, 5 Jan 2021 14:34:16 GMT
- Title: Brain Tumor Segmentation and Survival Prediction using Automatic Hard
mining in 3D CNN Architecture
- Authors: Vikas Kumar Anand, Sanjeev Grampurohit, Pranav Aurangabadkar, Avinash
Kori, Mahendra Khened, Raghavendra S Bhat, Ganapathy Krishnamurthi
- Abstract summary: We utilize 3-D fully convolutional neural networks (CNN) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI).
The architecture uses dense connectivity patterns to reduce the number of weights, together with residual connections, and is initialized with weights obtained from training this model on the BraTS 2018 dataset.
Hard mining is performed during training to focus on difficult segmentation cases: the dice similarity coefficient (DSC) threshold used to select hard cases is raised as the epoch count increases.
- Score: 0.30098583327398537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We utilize 3-D fully convolutional neural networks (CNN) to segment gliomas
and their constituents from multimodal Magnetic Resonance Images (MRI). The
architecture uses dense connectivity patterns to reduce the number of weights,
together with residual connections, and is initialized with weights obtained
from training this model on the BraTS 2018 dataset. Hard mining is performed
during training to focus on difficult segmentation cases: the dice similarity
coefficient (DSC) threshold used to select hard cases is raised as the epoch
count increases. On the BraTS 2020 validation data (n = 125), this architecture
achieved tumor core, whole tumor, and active tumor dice scores of 0.744, 0.876,
and 0.714, respectively. On the test dataset, the DSC of the tumor core and
active tumor improves by approximately 7%. In terms of DSC, our network's
performance on the BraTS 2020 test data is 0.775, 0.815, and 0.85 for
enhancing tumor, tumor core, and whole tumor, respectively. Overall survival of
a subject is determined using conventional machine learning from radiomics
features extracted using the generated segmentation mask. Our approach achieves
accuracies of 0.448 and 0.452 on the validation and test datasets, respectively.
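The hard-mining scheme described above hinges on two pieces: computing the per-case DSC and raising the DSC threshold that decides which cases count as "hard" as training progresses. The sketch below, in plain NumPy, is a minimal illustration of that idea; the linear schedule, its endpoints, and the helper names (dsc_threshold, select_hard_cases) are assumptions, not the authors' implementation.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

def dsc_threshold(epoch, start=0.5, end=0.9, total_epochs=100):
    """Epoch-dependent DSC threshold. The linear schedule and its endpoints are
    assumptions; the abstract only states that the threshold increases with epochs."""
    frac = min(epoch / float(total_epochs), 1.0)
    return start + frac * (end - start)

def select_hard_cases(case_dscs, epoch):
    """Return IDs of cases whose current DSC falls below the epoch's threshold,
    i.e. the 'hard' cases to emphasise in the next epoch."""
    threshold = dsc_threshold(epoch)
    return [case_id for case_id, dsc in case_dscs.items() if dsc < threshold]

# As the threshold rises, a case with DSC 0.70 counts as easy early on
# but is re-flagged as hard later in training.
print(select_hard_cases({"case_01": 0.40, "case_02": 0.70}, epoch=10))  # ['case_01']
print(select_hard_cases({"case_01": 0.40, "case_02": 0.70}, epoch=80))  # ['case_01', 'case_02']
```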
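For the survival-prediction stage, the abstract only states that overall survival is determined by conventional machine learning on radiomics features computed from the generated segmentation mask. The following sketch is one plausible realisation of that pipeline; the toy shape features and the scikit-learn RandomForestClassifier are assumptions, since the exact feature set and model are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def toy_shape_features(mask, voxel_volume_mm3=1.0):
    """Illustrative shape features from a 3-D binary tumor mask
    (not the paper's actual radiomics feature set)."""
    coords = np.argwhere(mask)
    if coords.size == 0:
        return np.zeros(7)
    volume = mask.sum() * voxel_volume_mm3                 # tumor volume
    extent = coords.max(axis=0) - coords.min(axis=0) + 1   # bounding-box size
    centroid = coords.mean(axis=0)                         # tumor location
    return np.concatenate([[volume], extent, centroid])

# X: one feature vector per subject, y: survival class (e.g. short/mid/long);
# both are hypothetical placeholders here.
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# print(cross_val_score(clf, X, y, cv=5).mean())
```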
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001; and 0.762 versus 0.542, p < 0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Multi-class Brain Tumor Segmentation using Graph Attention Network [3.3635982995145994]
This work introduces an efficient brain tumor segmentation model by exploiting advances in MRI and graph neural networks (GNNs).
The model represents the volumetric MRI as a region adjacency graph (RAG) and learns to identify the type of tumors through a graph attention network (GAT).
arXiv Detail & Related papers (2023-02-11T04:30:40Z) - Hybrid Window Attention Based Transformer Architecture for Brain Tumor
Segmentation [28.650980942429726]
We propose a volumetric vision transformer that follows two windowing strategies in attention for extracting fine features.
We trained and evaluated network architecture on the FeTS Challenge 2022 dataset.
Our performance on the online validation dataset is as follows: Dice Similarity Score of 81.71%, 91.38% and 85.40%.
arXiv Detail & Related papers (2022-09-16T03:55:48Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Glioma Prognosis: Segmentation of the Tumor and Survival Prediction
using Shape, Geometric and Clinical Information [13.822139791199106]
We exploit a convolutional neural network (CNN) with hypercolumn technique to segment tumor from healthy brain tissue.
Our model achieves a mean dice accuracy of 87.315%, 77.04% and 70.22% for the whole tumor, tumor core and enhancing tumor, respectively.
arXiv Detail & Related papers (2021-04-02T10:49:05Z) - HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that the dice (DSC) scores achieved by our proposed approach for ET, WT, and TC are 0.79457, 0.87494, and 0.83712, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z) - DR-Unet104 for Multimodal MRI brain tumor segmentation [7.786297008452384]
We propose a 2D deep residual Unet with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs.
We make multiple additions to the Unet architecture, including adding the 'bottleneck' residual block to the Unet encoder and adding dropout after each convolution block stack.
We produced a competitive lesion segmentation architecture, despite only 2D convolutions, having the added benefit that it can be used on lower power computers.
arXiv Detail & Related papers (2020-11-04T01:24:26Z) - Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net
neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) distances of 20.4, 6.7 and 19.5 mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.