Enhancing Generalized Fetal Brain MRI Segmentation using A Cascade Network with Depth-wise Separable Convolution and Attention Mechanism
- URL: http://arxiv.org/abs/2405.15205v1
- Date: Fri, 24 May 2024 04:23:22 GMT
- Title: Enhancing Generalized Fetal Brain MRI Segmentation using A Cascade Network with Depth-wise Separable Convolution and Attention Mechanism
- Authors: Zhigao Cai, Xing-Ming Zhao
- Abstract summary: We propose a novel cascade network called CasUNext to enhance the accuracy and generalization of fetal brain MRI segmentation.
We evaluate CasUNext on 150 fetal MRI scans acquired between 20 and 36 weeks of gestation on Philips and Siemens scanners.
Results demonstrate that CasUNext achieves improved segmentation performance compared to U-Nets and other state-of-the-art approaches.
- Score: 2.2252684361733293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic segmentation of the fetal brain remains challenging because of variation in fetal development and health, motion artifacts, and variability across gestational ages, while existing methods rely on high-quality datasets of healthy fetuses. In this work, we propose a novel cascade network, CasUNext, to enhance the accuracy and generalization of fetal brain MRI segmentation. CasUNext incorporates depth-wise separable convolution, attention mechanisms, and a two-step cascade architecture for efficient, high-precision segmentation: the first network localizes the fetal brain region, while the second network performs detailed segmentation. We evaluate CasUNext on 150 fetal MRI scans acquired between 20 and 36 weeks of gestation on Philips and Siemens scanners, covering axial, coronal, and sagittal views, and additionally validate it on a dataset of 50 abnormal fetuses. Results demonstrate that CasUNext achieves improved segmentation performance compared to U-Nets and other state-of-the-art approaches, obtaining an average Dice coefficient of 96.1% and a mean intersection over union of 95.9% across diverse scenarios. CasUNext shows promising capabilities for handling the challenges of multi-view fetal MRI and abnormal cases, which could facilitate various quantitative analyses and application to multi-site data.
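The paper itself does not include code; as a rough illustration of the ideas described in the abstract, the sketch below shows, in PyTorch, a depth-wise separable convolution block, a simple channel-attention gate, and a two-step cascade in which a first network coarsely localizes the brain and a second network segments the cropped region. All names (DepthwiseSeparableConv, ChannelAttention, TinyStageNet, cascade_segment) and the particular attention form are illustrative assumptions, not the authors' CasUNext implementation.
```python
# Minimal sketch (assumed PyTorch code, not the authors') of the components the
# abstract describes: depth-wise separable convolution, a lightweight attention
# gate, and a two-step localize-then-segment cascade.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depth-wise convolution followed by a 1x1 point-wise convolution."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention; the paper only states
    that attention mechanisms are used, so this particular form is an assumption."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # re-weight feature channels


class TinyStageNet(nn.Module):
    """Stand-in for one cascade stage; CasUNext itself uses a U-Net-like encoder-decoder."""

    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.features = DepthwiseSeparableConv(in_ch, base)
        self.attention = ChannelAttention(base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)

    def forward(self, x):
        return self.head(self.attention(self.features(x)))   # logits for a binary mask


def cascade_segment(localizer, segmenter, image, margin=8):
    """Step 1: coarse brain localization; step 2: fine segmentation of the cropped ROI.
    Assumes a single image (batch size 1) for brevity."""
    coarse = torch.sigmoid(localizer(image)) > 0.5
    ys, xs = torch.where(coarse[0, 0])
    if ys.numel() == 0:                            # nothing detected: segment the full image
        return torch.sigmoid(segmenter(image))
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, image.shape[2])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, image.shape[3])
    roi = image[:, :, y0:y1, x0:x1]                # crop around the localized brain
    fine = torch.sigmoid(segmenter(roi))
    full = torch.zeros_like(image)
    full[:, :, y0:y1, x0:x1] = fine                # paste the ROI prediction back
    return full


if __name__ == "__main__":
    localizer, segmenter = TinyStageNet(), TinyStageNet()
    mask = cascade_segment(localizer, segmenter, torch.randn(1, 1, 128, 128))
    print(mask.shape)  # torch.Size([1, 1, 128, 128])
```
The depth-wise plus point-wise factorization is what keeps each cascade stage lightweight: a k x k depth-wise convolution filters each channel separately and the 1x1 point-wise convolution mixes channels, using far fewer parameters than a full k x k convolution over all channel pairs.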
Related papers
- Fetal-BET: Brain Extraction Tool for Fetal MRI [4.214523989654048]
We build a large annotated dataset of approximately 72,000 2D fetal brain MRI images.
Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures.
Our approach leverages the rich information from multi-contrast (multi-sequence) fetal MRI data, enabling precise delineation of the fetal brain structures.
arXiv Detail & Related papers (2023-10-02T18:14:23Z)
- Tissue Segmentation of Thick-Slice Fetal Brain MR Scans with Guidance from High-Quality Isotropic Volumes [52.242103848335354]
We propose a novel Cycle-Consistent Domain Adaptation Network (C2DA-Net) to efficiently transfer the knowledge learned from high-quality isotropic volumes for accurate tissue segmentation of thick-slice scans.
Our C2DA-Net can fully utilize a small set of annotated isotropic volumes to guide tissue segmentation on unannotated thick-slice scans.
arXiv Detail & Related papers (2023-08-13T12:51:15Z)
- CAS-Net: Conditional Atlas Generation and Brain Segmentation for Fetal MRI [10.127399319119911]
We propose a novel network structure that can simultaneously generate conditional atlases and predict brain tissue segmentation.
The proposed method is trained and evaluated on 253 subjects from the developing Human Connectome Project.
arXiv Detail & Related papers (2022-05-17T11:23:02Z)
- Deep Learning Framework for Real-time Fetal Brain Segmentation in MRI [15.530500862944818]
We analyze the speed-accuracy performance of a variety of deep neural network models.
We devised a symbolically small convolutional neural network that combines spatial details at high resolution with context features extracted at lower resolutions.
We trained our model as well as eight alternative, state-of-the-art networks with manually-labeled fetal brain MRI slices.
arXiv Detail & Related papers (2022-05-02T20:43:14Z)
- RCA-IUnet: A residual cross-spatial attention guided inception U-Net model for tumor segmentation in breast ultrasound imaging [0.6091702876917281]
The article introduces an efficient residual cross-spatial attention guided inception U-Net (RCA-IUnet) model with minimal training parameters for tumor segmentation.
The RCA-IUnet model follows U-Net topology with residual inception depth-wise separable convolution and hybrid pooling layers.
Cross-spatial attention filters are added to suppress the irrelevant features and focus on the target structure.
arXiv Detail & Related papers (2021-08-05T10:35:06Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, at the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)