Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS
2022 Challenge Solution
- URL: http://arxiv.org/abs/2212.09310v1
- Date: Mon, 19 Dec 2022 09:14:23 GMT
- Authors: Ramy A. Zeineldin, Mohamed E. Karar, Oliver Burgert, Franziska
Mathis-Ullrich
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic segmentation is essential for brain tumor diagnosis, disease
prognosis, and follow-up therapy of patients with gliomas. Still, accurate
detection of gliomas and their sub-regions in multimodal MRI is very
challenging due to the variety of scanners and imaging protocols. Over recent
years, the BraTS Challenge has provided a large number of multi-institutional
MRI scans as a benchmark for glioma segmentation algorithms. This paper
describes our contribution to the BraTS 2022 Continuous Evaluation challenge.
We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg,
nnU-Net, and DeepSCAN, for automatic detection of glioma boundaries in
pre-operative MRI. Our ensemble models took first place in the final
evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788,
and 0.8803 and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole
tumor, tumor core, and enhancing tumor, respectively. Furthermore, the
proposed ensemble method ranked first on another unseen test dataset, namely
the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593,
and 0.9022 and HD95 values of 2.66, 1.72, and 3.32 for the whole tumor, tumor
core, and enhancing tumor, respectively. The Docker image for the winning
submission is publicly available at
https://hub.docker.com/r/razeineldin/camed22.
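The abstract describes fusing the predictions of three segmentation models (DeepSeg, nnU-Net, and DeepSCAN) into one ensemble output. The paper's exact fusion rule is not given here; a minimal sketch of one common strategy, a per-voxel majority vote over the models' label maps, might look like this (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse segmentation label maps by per-voxel majority vote.

    label_maps: list of integer arrays of identical shape, one per model
    (e.g. the outputs of three segmentation networks). Ties are broken in
    favor of the smallest label value, since np.argmax returns the first
    maximum.
    """
    stacked = np.stack(label_maps)  # shape: (n_models, *volume_shape)
    labels = np.unique(stacked)
    # Count, for each candidate label, how many models voted for it
    # at every voxel.
    votes = np.stack([(stacked == lbl).sum(axis=0) for lbl in labels])
    return labels[np.argmax(votes, axis=0)]

# Toy 1-D "volumes" from three hypothetical models
# (0 = background, 1 = tumor core, 2 = enhancing tumor):
a = np.array([0, 1, 2, 2])
b = np.array([0, 1, 1, 2])
c = np.array([0, 2, 2, 2])
print(majority_vote([a, b, c]))  # [0 1 2 2]
```

In practice, challenge ensembles often average per-class softmax probabilities rather than hard labels, which preserves each model's confidence; the voting version above is simply the easiest to verify by hand.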
Related papers
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge (arXiv, 2024-05-16)
  We describe the design and results of the BraTS 2023 Intracranial Meningioma Challenge.
  The BraTS Meningioma Challenge differed from prior BraTS glioma challenges in that it focused on meningiomas.
  The top-ranked team achieved lesion-wise median Dice similarity coefficients (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
- Advanced Tumor Segmentation in Medical Imaging: An Ensemble Approach for BraTS 2023 Adult Glioma and Pediatric Tumor Tasks (arXiv, 2024-03-14)
  This study outlines a methodology for segmenting tumors in two distinct tasks from the BraTS 2023 challenge: adult glioma and pediatric tumors.
  The approach leverages two encoder-decoder-based CNN models, SegResNet and MedNeXt, to segment three distinct tumor subregions.
  It achieved third place in the BraTS 2023 Adult Glioma Challenge, with average Dice and HD95 scores of 0.8313 and 36.38 on the test set, respectively.
- HNF-Netv2 for Brain Tumor Segmentation using Multi-modal MR Imaging (arXiv, 2022-02-10)
  The authors extend HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
  The method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
- Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS) Benchmark (arXiv, 2022-01-03)
  Lung cancer is one of the deadliest cancers, and its effective diagnosis and treatment depend on accurate delineation of the tumor.
  Human-centered segmentation, currently the most common approach, is subject to inter-observer variability.
  The 2018 VIP Cup drew global engagement, with participants from 42 countries accessing the competition data.
  In a nutshell, all the algorithms proposed during the competition are based on deep learning models combined with a false-positive reduction technique.
- Ensemble CNN Networks for GBM Tumors Segmentation using Multi-parametric MRI (arXiv, 2021-12-13)
  The authors propose a new aggregation of two deep learning frameworks, DeepSeg and nnU-Net, for automatic glioblastoma recognition in pre-operative mpMRI.
  The ensemble method obtains Dice similarity scores of 92.00, 87.33, and 84.10 and Hausdorff distances of 3.81, 8.91, and 16.02 for the enhancing tumor, tumor core, and whole tumor regions, respectively.
- Multi-stage Deep Layer Aggregation for Brain Tumor Segmentation (arXiv, 2021-01-02)
  The architecture consists of a cascade of three Deep Layer Aggregation neural networks, where each stage refines the response using the feature maps and probabilities of the previous stage.
  The neuroimaging data are part of the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge dataset.
  On the test set, the experimental results achieved Dice scores of 0.8858, 0.8297, and 0.7900, with Hausdorff distances of 5.32 mm, 22.32 mm, and 20.44 mm for the whole tumor, tumor core, and enhancing tumor, respectively.
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task (arXiv, 2020-12-30)
  H2NF-Net uses single and cascaded HNF-Nets to segment different brain tumor sub-regions.
  The model was trained and evaluated on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
  The method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
- Automatic Brain Tumor Segmentation with Scale Attention Network (arXiv, 2020-11-06)
  The Multimodal Brain Tumor Challenge 2020 (BraTS 2020) provides a common platform for comparing automatic algorithms on multi-parametric magnetic resonance imaging (mpMRI).
  The authors propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales.
  The framework was trained on the 369 challenge training cases provided by BraTS 2020 and achieved average Dice similarity coefficients (DSC) of 0.8828, 0.8433, and 0.8177, as well as 95% Hausdorff distances (in millimeters) of 5.2176, 17.9697, and 13.4298 on 166 testing cases for the whole tumor, tumor core, and enhancing tumor, respectively.
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution (arXiv, 2020-10-30)
  The task of brain tumor segmentation is automated and standardized with U-Net-like neural networks.
  Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
  The solution achieved Dice scores of 0.79, 0.89, and 0.84, as well as 95% Hausdorff distances of 20.4, 6.7, and 19.5 mm, on the final test dataset.
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model (arXiv, 2020-05-27)
  The aim of this work is to develop an accurate automatic segmentation method, based on deep learning models, for the myocardial borders on LGE-MRI.
  A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
  The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study, and slightly lower at apical slices.