UMambaAdj: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and nnU-Net ResEnc Planner
- URL: http://arxiv.org/abs/2410.12940v1
- Date: Wed, 16 Oct 2024 18:26:27 GMT
- Title: UMambaAdj: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and nnU-Net ResEnc Planner
- Authors: Jintao Ren, Kim Hochreuter, Jesper Folsted Kallehauge, Stine Sofia Korreman
- Abstract summary: Magnetic Resonance Imaging (MRI) plays a crucial role in adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast.
However, accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging.
Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multistage residual blocks.
- Abstract: Magnetic Resonance Imaging (MRI) plays a crucial role in MRI-guided adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast. However, accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging. Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multistage residual blocks. In this study, we integrate these strengths into a novel approach, termed 'UMambaAdj'. Our proposed method was evaluated on the HNTS-MRG 2024 challenge test set using pre-RT T2-weighted MRI images, achieving an aggregated Dice Similarity Coefficient (DSCagg) of 0.751 for GTVp and 0.842 for GTVn, with a mean DSCagg of 0.796. This approach demonstrates potential for more precise tumor delineation in MRI-guided adaptive radiotherapy, ultimately improving treatment outcomes for HNC patients. Team: DCPT-Stine's group.
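The aggregated Dice Similarity Coefficient (DSCagg) reported above pools voxel counts over the whole test set before forming the ratio, rather than averaging per-case Dice scores. A minimal NumPy sketch of that definition follows; the function name and toy masks are illustrative assumptions, not the challenge's evaluation code:

```python
# Sketch of aggregated Dice (DSCagg): intersections and mask volumes are
# summed over all cases first, then the ratio is taken once, so cases with
# small or empty structures do not dominate a per-case mean.
import numpy as np

def dsc_agg(preds, refs):
    """Aggregated Dice over paired lists of binary masks (one pair per case)."""
    inter = sum(np.logical_and(p, r).sum() for p, r in zip(preds, refs))
    total = sum(p.sum() + r.sum() for p, r in zip(preds, refs))
    return 2.0 * inter / total if total > 0 else 1.0

# Toy example: one 4x4 case with 4 predicted voxels, 6 reference voxels,
# and an overlap of 4 voxels -> DSCagg = 2*4 / (4+6) = 0.8.
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True
ref = np.zeros((4, 4), dtype=bool);  ref[:2, :3] = True
print(dsc_agg([pred], [ref]))  # 0.8
```

Pooling before dividing is what distinguishes DSCagg from mean per-case Dice: a patient with an empty GTVn, for example, contributes zero to both sums instead of forcing a degenerate 0 or 1 score.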
Related papers
- Gradient Map-Assisted Head and Neck Tumor Segmentation: A Pre-RT to Mid-RT Approach in MRI-Guided Radiotherapy
This study investigates the use of pre-RT tumor regions and local gradient maps to enhance mid-RT tumor segmentation for head and neck cancer.
A gradient map of the tumor region from the pre-RT image is computed and applied to mid-RT images to improve tumor boundary delineation.
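The gradient-map step described here can be sketched with NumPy: compute a gradient-magnitude map of the pre-RT image and zero it outside the pre-RT tumor mask. The function name, the choice of `np.gradient` as the edge operator, and the toy step-edge image are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch: gradient-magnitude map of the pre-RT image, restricted to the
# pre-RT tumor region, which could be stacked with the mid-RT image as an
# extra input channel to sharpen boundary delineation.
import numpy as np

def tumor_gradient_map(pre_rt_image, pre_rt_mask):
    """Gradient magnitude of the image, zeroed outside the tumor mask."""
    grads = np.gradient(pre_rt_image.astype(float))  # one array per axis
    magnitude = np.sqrt(sum(g ** 2 for g in grads))
    return np.where(pre_rt_mask, magnitude, 0.0)

# Toy 2D example: a step edge at column 4 and a small tumor mask around it.
pre = np.zeros((8, 8)); pre[:, 4:] = 1.0
gtv = np.zeros((8, 8), dtype=bool); gtv[3:5, 3:5] = True
gmap = tumor_gradient_map(pre, gtv)  # nonzero only inside the mask, at the edge
```

The same code works unchanged on 3D volumes, since `np.gradient` returns one gradient array per axis.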
arXiv Detail & Related papers (2024-10-16T18:26:51Z)
- Two Stage Segmentation of Cervical Tumors using PocketNet
This work applied a novel deep-learning model (PocketNet) to segment the cervix, vagina, uterus, and tumor(s) on T2w MRI.
PocketNet achieved a mean Dice-Sorensen similarity coefficient (DSC) exceeding 70% for tumor segmentation and 80% for organ segmentation.
arXiv Detail & Related papers (2024-09-17T17:48:12Z)
- Evaluating the Impact of Sequence Combinations on Breast Tumor Segmentation in Multiparametric MRI
The effect of sequence combinations in mpMRI remains under-investigated.
The nnU-Net model using DCE sequences achieved a Dice similarity coefficient (DSC) of 0.69 ± 0.18 for functional tumor volume (FTV) segmentation.
arXiv Detail & Related papers (2024-06-12T02:09:05Z)
- The 2024 Brain Tumor Segmentation (BraTS) Challenge: Glioma Segmentation on Post-treatment MRI
The 2024 Brain Tumor (BraTS) challenge on post-treatment glioma MRI will provide a community standard and benchmark for state-of-the-art automated segmentation models.
Challenge competitors will develop automated segmentation models to predict four distinct tumor sub-regions.
Models will be evaluated on separate validation and test datasets.
arXiv Detail & Related papers (2024-05-28T17:07:55Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading
Brain tumor represents one of the most fatal cancers around the world, and is very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Recurrence-free Survival Prediction under the Guidance of Automatic Gross Tumor Volume Segmentation for Head and Neck Cancers
We developed an automated primary tumor (GTVp) and lymph nodes (GTVn) segmentation method.
We extracted radiomics features from the segmented tumor volume and constructed a multi-modality tumor recurrence-free survival (RFS) prediction model.
arXiv Detail & Related papers (2022-09-22T18:44:57Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- HNF-Netv2 for Brain Tumor Segmentation using multi-modal MR Imaging
We extend our HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
Our method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
arXiv Detail & Related papers (2022-02-10T06:34:32Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was comparable to the intra-observer study, and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.