Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation
- URL: http://arxiv.org/abs/2108.06772v1
- Date: Sun, 15 Aug 2021 16:04:09 GMT
- Title: Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation
- Authors: Daniel E. Cahall, Ghulam Rasool, Nidhal C. Bouaynaya and Hassan M. Fathallah-Shaykh
- Abstract summary: We propose a new end-to-end brain tumor segmentation architecture based on U-Net.
Our proposed model performed significantly better than the state-of-the-art U-Net-based model for tumor core and whole tumor segmentation.
- Score: 0.9176056742068814
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Magnetic resonance imaging (MRI) is routinely used for brain tumor diagnosis,
treatment planning, and post-treatment surveillance. Recently, various models
based on deep neural networks have been proposed for the pixel-level
segmentation of tumors in brain MRIs. However, the structural variations,
spatial dissimilarities, and intensity inhomogeneity in MRIs make segmentation
a challenging task. We propose a new end-to-end brain tumor segmentation
architecture based on U-Net that integrates Inception modules and dilated
convolutions into its contracting and expanding paths. This allows us to
extract local structural as well as global contextual information. We performed
segmentation of glioma sub-regions, including tumor core, enhancing tumor, and
whole tumor using Brain Tumor Segmentation (BraTS) 2018 dataset. Our proposed
model performed significantly better than the state-of-the-art U-Net-based
model ($p<0.05$) for tumor core and whole tumor segmentation.
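The paper's core architectural idea, Inception-style parallel branches whose convolutions use different dilation rates so that local structure and wider context are captured with the same number of weights, can be illustrated in miniature. The sketch below is not the authors' implementation: the function names, the 1D setting, and the fixed (untrained) kernel are illustrative assumptions, whereas the actual DIU-Net uses trained 2D convolutions inside the contracting and expanding paths of a U-Net.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution (cross-correlation form) with dilation.

    A dilated kernel of size k covers (k - 1) * dilation + 1 input
    samples, so the receptive field grows with the dilation rate while
    the number of weights stays the same.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [
        sum(kernel[j] * signal[start + j * dilation] for j in range(k))
        for start in range(len(signal) - span + 1)
    ]


def inception_dilated_block(signal, kernel, dilations=(1, 2, 4)):
    """Inception-style block: parallel branches apply the same kernel at
    several dilation rates; outputs are cropped to a common length so
    they could be concatenated channel-wise (returned here as a list of
    per-branch outputs).
    """
    branches = [dilated_conv1d(signal, kernel, d) for d in dilations]
    min_len = min(len(b) for b in branches)
    return [b[:min_len] for b in branches]
```

With a difference kernel `[1, -1]` on a ramp signal, each branch responds with a constant equal to minus its dilation rate, which makes the growing receptive field directly visible in the output.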
Related papers
- Hybrid Multihead Attentive Unet-3D for Brain Tumor Segmentation [0.0]
Brain tumor segmentation is a critical task in medical image analysis, aiding in the diagnosis and treatment planning of brain tumor patients.
Various deep learning-based techniques have made significant progress in this field; however, they still face accuracy limitations due to the complex and variable nature of brain tumor morphology.
We propose a novel Hybrid Multihead Attentive U-Net architecture, to address the challenges in accurate brain tumor segmentation.
arXiv Detail & Related papers (2024-05-22T02:46:26Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Fully Automated Tumor Segmentation for Brain MRI data using Multiplanner UNet [0.29998889086656577]
This study evaluates the efficacy of the Multi-Planner U-Net (MPUnet) approach in segmenting different tumor subregions across three challenging datasets.
arXiv Detail & Related papers (2024-01-12T10:46:19Z)
- Brain tumor multi classification and segmentation in MRI images using deep learning [3.1248717814228923]
The classification model is based on the EfficientNetB1 architecture and is trained to classify images into four classes: meningioma, glioma, pituitary adenoma, and no tumor.
The segmentation model is based on the U-Net architecture and is trained to accurately segment the tumor from the MRI images.
arXiv Detail & Related papers (2023-04-20T01:32:55Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
The current medical workflow requires manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Brain Tumor Segmentation Network Using Attention-based Fusion and Spatial Relationship Constraint [19.094164029068462]
We develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images.
We evaluate our method on the test set of the Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020).
arXiv Detail & Related papers (2020-10-29T14:51:10Z)
- Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain MRI [47.26574993639482]
We show improved anomaly segmentation performance and the general capability to obtain much crisper reconstructions of input data at native resolution.
Modeling the Laplacian pyramid further enables the delineation and aggregation of lesions at multiple scales.
arXiv Detail & Related papers (2020-06-23T09:20:42Z)
- Region of Interest Identification for Brain Tumors in Magnetic Resonance Images [8.75217589103206]
We propose a fast, automated method with low computational complexity to find the smallest bounding box around the tumor region.
This region-of-interest can be used as a preprocessing step in training networks for subregion tumor segmentation.
The proposed method is evaluated on the BraTS 2015 dataset; the average Dice score achieved is 0.73, an acceptable result for this application.
arXiv Detail & Related papers (2020-02-26T14:10:40Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.