Does anatomical contextual information improve 3D U-Net based brain
tumor segmentation?
- URL: http://arxiv.org/abs/2010.13460v3
- Date: Fri, 4 Mar 2022 12:13:01 GMT
- Authors: Iulian Emil Tampu and Neda Haj-Hosseini and Anders Eklund
- Abstract summary: It is investigated whether the addition of contextual information from the brain anatomy improves U-Net-based brain tumor segmentation.
The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective, robust, and automatic tools for brain tumor segmentation are
needed for the extraction of information useful in treatment planning from
magnetic resonance (MR) images. Context-aware artificial intelligence is an
emerging concept for the development of deep learning applications for
computer-aided medical image analysis. In this work, it is investigated whether
the addition of contextual information from the brain anatomy in the form of
white matter, gray matter, and cerebrospinal fluid masks and probability maps
improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used
to train and test two standard 3D U-Net models that, in addition to the
conventional MR image modalities, used the anatomical contextual information as
extra channels in the form of binary masks (CIM) or probability maps (CIP). A
baseline model (BLM) that only used the conventional MR image modalities was
also trained. The impact of adding contextual information was investigated in
terms of overall segmentation accuracy, model training time, domain
generalization, and compensation for fewer MR modalities available for each
subject. Results show that there is no statistically significant difference
when comparing Dice scores between the baseline model and the contextual
information models, even when comparing performances for high- and low-grade
tumors independently. Only in the case of compensation for fewer MR modalities
available for each subject did the addition of anatomical contextual
information significantly improve the segmentation of the whole tumor. Overall,
there is no significant improvement in segmentation performance when
using anatomical contextual information in the form of either binary masks or
probability maps as extra channels.
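The input setup described above, conventional MR modalities plus anatomical tissue masks or probability maps stacked as additional input channels, can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name, array shapes, and the use of NumPy are assumptions, and the tissue maps are assumed to be co-registered with the MR volumes.

```python
import numpy as np

def build_input_volume(mr_modalities, context_maps):
    """Stack MR modalities and anatomical context maps as input channels.

    mr_modalities: 3D arrays for each modality (e.g. T1, T1ce, T2, FLAIR)
    context_maps:  3D arrays for WM/GM/CSF binary masks (CIM) or
                   probability maps (CIP), co-registered to the MR volumes
    Returns a channels-first 4D array suitable as a 3D U-Net input.
    """
    channels = [np.asarray(v, dtype=np.float32)
                for v in list(mr_modalities) + list(context_maps)]
    # All channels must share the same spatial shape to be stacked.
    shapes = {c.shape for c in channels}
    assert len(shapes) == 1, "all channels must share the same spatial shape"
    return np.stack(channels, axis=0)

# Toy example: 4 MR modalities + 3 tissue probability maps on an 8^3 patch.
vol = lambda: np.random.rand(8, 8, 8).astype(np.float32)
x = build_input_volume([vol() for _ in range(4)], [vol() for _ in range(3)])
print(x.shape)  # (7, 8, 8, 8): 7 input channels, as in the CIP/CIM models
```

The baseline model (BLM) would correspond to passing only the four MR modalities, giving a 4-channel input instead of 7.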
Related papers
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Brain Tumor Segmentation from MRI Images using Deep Learning Techniques [3.1498833540989413]
A public MRI dataset contains 3064 T1-weighted images from 233 patients with three variants of brain tumor, viz. meningioma, glioma, and pituitary tumor.
The dataset files were converted and preprocessed before being used in the methodology, which implements and trains several well-known deep learning image segmentation models.
The experimental findings showed that among all the applied approaches, the recurrent residual U-Net which uses Adam reaches a Mean Intersection Over Union of 0.8665 and outperforms other compared state-of-the-art deep learning models.
arXiv Detail & Related papers (2023-04-29T13:33:21Z) - Investigating certain choices of CNN configurations for brain lesion
segmentation [5.148195106469231]
Deep learning models, in particular CNNs, have been a methodology of choice in many applications of medical image analysis including brain tumor segmentation.
We investigated the main design aspects of CNN models for the specific task of MRI-based brain tumor segmentation.
arXiv Detail & Related papers (2022-12-02T15:24:44Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial
Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data containing the skull or other artifacts, without preprocessing the images and without a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z) - SMU-Net: Style matching U-Net for brain tumor segmentation with missing
modalities [4.855689194518905]
We propose a style matching U-Net (SMU-Net) for brain tumour segmentation on MRI images.
Our co-training approach utilizes a content and style-matching mechanism to distill the informative features from the full-modality network into a missing modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
arXiv Detail & Related papers (2022-04-06T17:55:19Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation [17.756591105686]
This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorization of 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BraTS 2020 testing set show that the proposed approach achieves Dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
arXiv Detail & Related papers (2020-12-12T09:09:04Z) - SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on
Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods, and another classification model using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend the previous findings in gender differences from diffusion-tensor imaging on T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.