HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2012.06760v1
- Date: Sat, 12 Dec 2020 09:09:04 GMT
- Title: HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation
- Authors: Saqib Qamar, Parvez Ahmad, Linlin Shen
- Abstract summary: This paper proposes hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorized 3D weighted convolutional layers in the residual inception block.
Preliminary results on the BRATS 2020 testing set show that the proposed approach achieves dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.
- Score: 17.756591105686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain tumor segmentation task aims to classify tissue into the whole
tumor (WT), tumor core (TC), and enhancing tumor (ET) classes using multi-modal
MRI images. Quantitative analysis of brain tumors is critical for clinical
decision making. Manual segmentation is tedious, time-consuming, and
subjective, and at the same time the task is very challenging for automatic
segmentation methods. Thanks to their powerful learning ability, convolutional
neural networks (CNNs), mainly fully convolutional networks, have shown
promising results in brain tumor segmentation. This paper further boosts the
performance of brain tumor segmentation by proposing hyperdense inception 3D
UNet (HI-Net), which captures multi-scale information by stacking factorized
3D weighted convolutional layers in the residual inception block. We use
hyperdense connections among the factorized convolutional layers to extract
more contextual information, with the help of feature reuse. We use a dice
loss function to cope with class imbalance. We validate the proposed
architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020
testing dataset. Preliminary results on the BRATS 2020 testing set show that
the proposed approach achieves dice (DSC) scores of 0.79457, 0.87494, and
0.83712 for ET, WT, and TC, respectively.
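
To make the block design more concrete, below is a minimal PyTorch sketch (not the authors' released implementation; the module names, the exact kernel factorization, and the normalization choices are illustrative assumptions) of a residual inception-style block that factorizes a 3D convolution into stacked anisotropic convolutions and densely connects their outputs so that later layers reuse all earlier feature maps.

```python
import torch
import torch.nn as nn


class FactorizedDenseBlock(nn.Module):
    """Illustrative residual block: a 3x3x3 convolution is factorized into
    three anisotropic 3D convolutions whose outputs are densely connected
    (each layer sees the concatenation of all previous feature maps)."""

    def __init__(self, channels):
        super().__init__()
        kernels = [(3, 3, 1), (3, 1, 3), (1, 3, 3)]  # assumed factorization
        self.convs = nn.ModuleList()
        for i, k in enumerate(kernels):
            pad = tuple(s // 2 for s in k)
            # dense connectivity: the input channel count grows with each stacked layer
            self.convs.append(nn.Sequential(
                nn.Conv3d(channels * (i + 1), channels, k, padding=pad, bias=False),
                nn.InstanceNorm3d(channels),
                nn.LeakyReLU(inplace=True),
            ))
        # 1x1x1 convolution to fuse the densely collected features
        self.fuse = nn.Conv3d(channels * (len(kernels) + 1), channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection


if __name__ == "__main__":
    block = FactorizedDenseBlock(channels=16)
    out = block(torch.randn(1, 16, 32, 32, 32))  # (batch, channels, D, H, W)
    print(out.shape)  # torch.Size([1, 16, 32, 32, 32])
```

Dense connectivity here simply means that each factorized layer receives the concatenation of the block input and all previously computed feature maps, which is the feature-reuse idea referred to in the abstract.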
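
The dice loss used to handle class imbalance is typically implemented as a soft, differentiable overlap measure; the exact variant used in the paper is not given here, so the following is a common formulation offered as an assumption.

```python
import torch


def soft_dice_loss(probs, target, eps=1e-5):
    """Soft dice loss for one-hot targets.

    probs:  (N, C, D, H, W) softmax/sigmoid probabilities
    target: (N, C, D, H, W) one-hot ground-truth masks
    """
    dims = (0, 2, 3, 4)                        # sum over batch and spatial axes
    intersection = torch.sum(probs * target, dims)
    cardinality = torch.sum(probs + target, dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()                   # average over classes
```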
Related papers
- MBDRes-U-Net: Multi-Scale Lightweight Brain Tumor Segmentation Network [0.0]
This study proposes the MBDRes-U-Net model using the three-dimensional (3D) U-Net framework, which integrates multibranch residual blocks and fused attention into the model.
The computational burden of the model is reduced by the branch strategy, which effectively uses the rich local features in multimodal images.
arXiv Detail & Related papers (2024-11-04T09:03:43Z)
- Self-calibrated convolution towards glioma segmentation [45.74830585715129]
We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
arXiv Detail & Related papers (2024-02-07T19:51:13Z)
- Automated Ensemble-Based Segmentation of Adult Brain Tumors: A Novel Approach Using the BraTS AFRICA Challenge Data [0.0]
We introduce an ensemble method that comprises eleven unique variations based on three core architectures.
Our findings reveal that the ensemble approach, combining different architectures, outperforms single models.
These results underline the potential of tailored deep learning techniques in precisely segmenting brain tumors.
arXiv Detail & Related papers (2023-08-14T15:34:22Z)
- Prediction of brain tumor recurrence location based on multi-modal fusion and nonlinear correlation learning [55.789874096142285]
We present a deep learning-based brain tumor recurrence location prediction network.
We first train a multi-modal brain tumor segmentation network on the public dataset BraTS 2021.
Then, the pre-trained encoder is transferred to our private dataset for extracting the rich semantic features.
Two decoders are constructed to jointly segment the present brain tumor and predict its future tumor recurrence location.
arXiv Detail & Related papers (2023-04-11T02:45:38Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Triplet Contrastive Learning for Brain Tumor Classification [99.07846518148494]
We present a novel approach of directly learning deep embeddings for brain tumor types, which can be used for downstream tasks such as classification.
We evaluate our method on an extensive brain tumor dataset which consists of 27 different tumor classes, out of which 13 are defined as rare.
arXiv Detail & Related papers (2021-08-08T11:26:34Z)
- QuickTumorNet: Fast Automatic Multi-Class Segmentation of Brain Tumors [0.0]
Manual segmentation of brain tumors from 3D MRI volumes is a time-consuming task.
Our model, QuickTumorNet, demonstrated fast, reliable, and accurate brain tumor segmentation.
arXiv Detail & Related papers (2020-12-22T23:16:43Z)
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff 95% of 20.4, 6.7 and 19.5mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
- Context Aware 3D UNet for Brain Tumor Segmentation [24.27997192961372]
UNet is the primary driver of performance for 3D CNN architectures in medical imaging tasks.
We propose a modified UNet architecture for brain tumor segmentation.
arXiv Detail & Related papers (2020-10-25T10:32:25Z)
- Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs [2.4736005621421686]
The multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation.
We evaluate the method on the BraTS 2019 challenge.
arXiv Detail & Related papers (2020-01-06T07:47:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.