Segmentation of Pediatric Brain Tumors using a Radiologically informed, Deep Learning Cascade
- URL: http://arxiv.org/abs/2410.14020v1
- Date: Thu, 17 Oct 2024 20:46:13 GMT
- Title: Segmentation of Pediatric Brain Tumors using a Radiologically informed, Deep Learning Cascade
- Authors: Timothy Mulvany, Daniel Griffiths-King, Jan Novak, Heather Rose
- Abstract summary: Monitoring Diffuse Intrinsic Pontine Glioma (DIPG) and Diffuse Midline Glioma (DMG) brain tumors in pediatric patients is key for assessment of treatment response.
RAPNO guidelines recommend the volumetric measurement of these tumors using MRI.
The current study presents a novel adaptation of existing nnU-Net approaches for pediatric brain tumor segmentation.
- Score: 0.0
- License:
- Abstract: Monitoring of Diffuse Intrinsic Pontine Glioma (DIPG) and Diffuse Midline Glioma (DMG) brain tumors in pediatric patients is key for assessment of treatment response. Response Assessment in Pediatric Neuro-Oncology (RAPNO) guidelines recommend the volumetric measurement of these tumors using MRI. Segmentation challenges, such as the Brain Tumor Segmentation (BraTS) Challenge, promote the development of automated approaches that are replicable, generalizable and accurate, to aid in these tasks. The current study presents a novel adaptation of existing nnU-Net approaches for pediatric brain tumor segmentation, submitted to the BraTS-PEDs 2024 challenge. We apply an adapted nnU-Net with hierarchical cascades to the segmentation task of the BraTS-PEDs 2024 challenge. The residual encoder variant of nnU-Net, used as our baseline model, already provides high-quality segmentations. We incorporate multiple changes to the implementation of nnU-Net and devise a novel two-stage cascaded nnU-Net to segment the substructures of brain tumors from coarse to fine. Outputs from the nnU-Net Residual Encoder (trained to segment the CC, ED, ET and NET tumor labels from T1w, T1w-CE, T2w and T2-FLAIR MRI) are passed to two additional models: one classifying ET versus NET and a second classifying CC versus ED, using cascade learning. We use radiological guidelines to steer which multi-parametric MRI (mpMRI) sequences to use in these cascading models. Compared to a default nnU-Net and an ensembled nnU-Net as baseline approaches, our novel method provides robust segmentations for the BraTS-PEDs 2024 challenge, achieving mean Dice scores of 0.657, 0.904, 0.703, and 0.967, and HD95 of 76.2, 10.1, 111.0, and 12.3 for the ET, NET, CC and ED, respectively.
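The cascade described in the abstract lends itself to a short sketch: a base multi-class model proposes coarse labels, then two binary refinement models re-classify voxels within the tumor-core (ET/NET) and fluid (CC/ED) groupings, each seeing only a radiologically chosen subset of the mpMRI sequences. The sketch below is a minimal illustration under assumed conventions: the label mapping, the channel routing (T1w/T1w-CE for ET vs NET, T2w/FLAIR for CC vs ED), and the model callables (`base_model`, `et_vs_net_model`, `cc_vs_ed_model`) are hypothetical placeholders, not the authors' released implementation or the nnU-Net API.

```python
import numpy as np

# Label convention assumed for this sketch (illustrative, not taken from the paper):
# 1 = enhancing tumor (ET), 2 = non-enhancing tumor (NET),
# 3 = cystic component (CC), 4 = edema (ED); 0 = background.
ET, NET, CC, ED = 1, 2, 3, 4


def cascade_segment(mpmri, base_model, et_vs_net_model, cc_vs_ed_model):
    """Coarse-to-fine cascade: a base multi-class model proposes all four
    labels, then two binary models re-label voxels within each coarse
    grouping, seeing only the MRI sequences judged most informative for
    that distinction (the radiologically informed choice)."""
    # mpmri: dict of co-registered 3D volumes of equal shape (D, H, W),
    # e.g. keys 't1', 't1ce', 't2', 'flair'.
    x_all = np.stack([mpmri['t1'], mpmri['t1ce'], mpmri['t2'], mpmri['flair']])
    coarse = base_model(x_all)          # (D, H, W) integer label map
    refined = coarse.copy()

    # Stage 2a: re-decide ET vs NET inside the coarse ET/NET voxels.
    # Enhancement is read from T1w and T1w-CE (assumed channel choice).
    core = np.isin(coarse, (ET, NET))
    if core.any():
        et_prob = et_vs_net_model(np.stack([mpmri['t1'], mpmri['t1ce']]), core)
        refined[core] = np.where(et_prob[core] >= 0.5, ET, NET)

    # Stage 2b: re-decide CC vs ED inside the coarse CC/ED voxels.
    # Fluid characteristics are read from T2w and FLAIR (assumed choice).
    fluid = np.isin(coarse, (CC, ED))
    if fluid.any():
        cc_prob = cc_vs_ed_model(np.stack([mpmri['t2'], mpmri['flair']]), fluid)
        refined[fluid] = np.where(cc_prob[fluid] >= 0.5, CC, ED)

    return refined
```

Restricting each refinement model to the sequences a radiologist would read for that distinction keeps the second-stage inputs small and mirrors the "radiologically informed" routing the abstract describes.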
Related papers
- Multi-Layer Feature Fusion with Cross-Channel Attention-Based U-Net for Kidney Tumor Segmentation [0.0]
U-Net based deep learning techniques are emerging as a promising approach for automated medical image segmentation.
We present an improved U-Net based model for end-to-end automated semantic segmentation of CT scan images to identify renal tumors.
arXiv Detail & Related papers (2024-10-20T19:02:41Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on tumor and multiple sclerosis lesion data and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- CKD-TransBTS: Clinical Knowledge-Driven Hybrid Transformer with Modality-Correlated Cross-Attention for Brain Tumor Segmentation [37.39921484146194]
Brain tumor segmentation in magnetic resonance image (MRI) is crucial for brain tumor diagnosis, cancer management and research purposes.
With the great success of the ten-year BraTS challenges, many outstanding BTS models have been proposed to tackle the difficulties of BTS in different technical aspects.
We propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS.
arXiv Detail & Related papers (2022-07-15T09:35:29Z)
- HNF-Netv2 for Brain Tumor Segmentation using multi-modal MR Imaging [86.52489226518955]
We extend our HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
Our method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
arXiv Detail & Related papers (2022-02-10T06:34:32Z)
- Feature-enhanced Generation and Multi-modality Fusion based Deep Neural Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all MRI modalities are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Covariance Self-Attention Dual Path UNet for Rectal Tumor Segmentation [5.161531917413708]
We propose a Covariance Self-Attention Dual Path UNet (CSA-DPUNet) to increase the capability of extracting enough feature information for rectal tumor segmentation.
Experiments show that CSA-DPUNet brings improvements of 15.31%, 7.2%, 11.8%, and 9.5% in Dice coefficient, precision (P), recall (R), and F1, respectively.
arXiv Detail & Related papers (2020-11-04T08:01:19Z)
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89 and 0.84, as well as Hausdorff 95% distances of 20.4, 6.7 and 19.5 mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams were used for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study, and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
- DeepSeg: Deep Neural Network Framework for Automatic Brain Tumor Segmentation using Magnetic Resonance FLAIR Images [0.0]
Gliomas are the most common and aggressive type of brain tumors.
Fluid-Attenuated Inversion Recovery (FLAIR) MRI can provide the physician with information about tumor infiltration.
This paper proposes a new generic deep learning architecture, named DeepSeg, for fully automated detection and segmentation of brain lesions.
arXiv Detail & Related papers (2020-04-26T09:50:02Z)