Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-guided Radiotherapy
- URL: http://arxiv.org/abs/2411.14752v1
- Date: Fri, 22 Nov 2024 06:16:56 GMT
- Title: Comparative Analysis of nnUNet and MedNeXt for Head and Neck Tumor Segmentation in MRI-guided Radiotherapy
- Authors: Nikoo Moradi, André Ferreira, Behrus Puladi, Jens Kleesiek, Emad Fatemizadeh, Gijs Luijten, Victor Alves, Jan Egger
- Abstract summary: We present our solution as team TUMOR to the HNTS-MRG24 MICCAI Challenge.
The challenge focuses on automated segmentation of primary gross tumor volumes (GTVp) and metastatic lymph node gross tumor volumes (GTVn) in pre-RT and mid-RT MRI images.
Our solution for Task 1 achieved 1st place in the final test phase with an aggregated Dice Similarity Coefficient of 0.8254, and our solution for Task 2 ranked 8th with a score of 0.7005.
- Score: 2.13048414569993
- Abstract: Radiation therapy (RT) is essential in treating head and neck cancer (HNC), with magnetic resonance imaging (MRI)-guided RT offering superior soft-tissue contrast and functional imaging. However, manual tumor segmentation is time-consuming and complex, and therefore remains a challenge. In this study, we present our solution as team TUMOR to the HNTS-MRG24 MICCAI Challenge, which focuses on automated segmentation of primary gross tumor volumes (GTVp) and metastatic lymph node gross tumor volumes (GTVn) in pre-RT and mid-RT MRI images. We utilized the HNTS-MRG2024 dataset, which consists of 150 MRI scans from patients diagnosed with HNC, including original and registered pre-RT and mid-RT T2-weighted images with corresponding segmentation masks for GTVp and GTVn. We employed two state-of-the-art deep learning models, nnUNet and MedNeXt. For Task 1, we pretrained models on registered pre-RT and mid-RT images, followed by fine-tuning on original pre-RT images. For Task 2, we combined registered pre-RT images, registered pre-RT segmentation masks, and mid-RT data as a multi-channel input for training. Our solution for Task 1 achieved 1st place in the final test phase with an aggregated Dice Similarity Coefficient of 0.8254, and our solution for Task 2 ranked 8th with a score of 0.7005. The proposed solution is publicly available in a GitHub repository.
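To make the Task 2 setup concrete: the description above combines three registered volumes into one multi-channel input. Below is a minimal sketch of that assembly step, assuming SimpleITK; the file names are placeholders, not the challenge's actual naming scheme.

```python
import numpy as np
import SimpleITK as sitk

def load(path: str) -> np.ndarray:
    """Read a NIfTI volume into a (slices, H, W) float32 array."""
    return sitk.GetArrayFromImage(sitk.ReadImage(path)).astype(np.float32)

# Placeholder file names, not the HNTS-MRG2024 naming scheme.
mid_rt      = load("case_0001_midRT_T2.nii.gz")
pre_rt_reg  = load("case_0001_preRT_T2_registered.nii.gz")
pre_rt_mask = load("case_0001_preRT_mask_registered.nii.gz")

# Stack channels-first, as nnUNet-style trainers expect multi-channel input.
x = np.stack([mid_rt, pre_rt_reg, pre_rt_mask])  # shape: (3, S, H, W)
```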
Related papers
- Gradient Map-Assisted Head and Neck Tumor Segmentation: A Pre-RT to Mid-RT Approach in MRI-Guided Radiotherapy [0.04590531202809992]
This study investigates the use of pre-RT tumor regions and local gradient maps to enhance mid-RT tumor segmentation for head and neck cancer.
A gradient map of the tumor region from the pre-RT image is computed and applied to mid-RT images to improve tumor boundary delineation.
arXiv Detail & Related papers (2024-10-16T18:26:51Z)
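A minimal sketch of the gradient-map step described above, using scipy; this is an illustrative reconstruction, not the authors' code, and the Gaussian sigma and dilation radius are assumptions.

```python
import numpy as np
from scipy import ndimage

def tumor_gradient_map(pre_rt: np.ndarray, pre_rt_mask: np.ndarray,
                       sigma: float = 1.0) -> np.ndarray:
    """Gradient-magnitude map restricted to the (dilated) pre-RT tumor region."""
    grad = ndimage.gaussian_gradient_magnitude(pre_rt, sigma=sigma)
    # Dilate the mask slightly so the boundary gradients are retained.
    region = ndimage.binary_dilation(pre_rt_mask > 0, iterations=3)
    return np.where(region, grad, 0.0)

# Assuming the pre-RT image is registered to mid-RT space, the map can be
# appended to the mid-RT image as an extra input channel.
```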
- UMambaAdj: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and nnU-Net ResEnc Planner [0.04924932828166548]
Magnetic Resonance Imaging (MRI) plays a crucial role in adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast.
However, accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging.
Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multistage residual blocks.
arXiv Detail & Related papers (2024-10-16T18:26:27Z)
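As a rough illustration of the second ingredient above, here is a generic residual block of the kind a residual encoder stacks; the instance normalization and LeakyReLU choices are assumptions, not the exact nnU-Net ResEnc configuration.

```python
import torch
import torch.nn as nn

class ResidualEncoderBlock(nn.Module):
    """Generic 3D residual block: two conv layers plus an identity skip."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch), nn.LeakyReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.InstanceNorm3d(ch),
        )
        self.act = nn.LeakyReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))  # skip connection around the conv body
```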
- Intraoperative Glioma Segmentation with YOLO + SAM for Improved Accuracy in Tumor Resection [1.9461727843485295]
Gliomas present significant surgical challenges due to their similarity to healthy tissue.
MRI images are often ineffective during surgery due to factors such as brain shift.
This paper presents a deep learning pipeline combining You Only Look Once version 8 (YOLOv8) and the Segment Anything Model (SAM) with a Vision Transformer-base backbone.
arXiv Detail & Related papers (2024-08-27T07:58:08Z)
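The detect-then-segment pipeline above can be sketched as follows, assuming the ultralytics and segment-anything packages; the checkpoint paths are placeholders, and this is not the authors' released code.

```python
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoints; the paper's fine-tuned weights are not assumed here.
detector = YOLO("yolov8n.pt")
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def detect_then_segment(image: np.ndarray) -> list[np.ndarray]:
    """Run YOLOv8 detection, then prompt SAM with each predicted box."""
    predictor.set_image(image)  # expects an HxWx3 uint8 RGB array
    masks = []
    for box in detector(image)[0].boxes.xyxy.cpu().numpy():
        m, _, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(m[0])  # one binary mask per detected region
    return masks
```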
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
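To illustrate the volumetric classification idea above, here is a toy 3D CNN for the binary T2-vs-T3 decision; the layer sizes are hypothetical and not the paper's architecture.

```python
import torch
import torch.nn as nn

class RectalStageNet3D(nn.Module):
    """Toy volumetric CNN producing T2-vs-T3 logits from an MR volume."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # global pooling over the whole volume
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One single-channel volume of 32 slices at 128x128:
logits = RectalStageNet3D()(torch.randn(1, 1, 32, 128, 128))
```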
- Parotid Gland MRI Segmentation Based on Swin-Unet and Multimodal Images [7.934520786027202]
Parotid gland tumors account for approximately 2% to 10% of head and neck tumors.
Deep learning methods have developed rapidly, with Transformers now surpassing traditional convolutional neural networks in computer vision.
On the test set, the model achieved a DSC of 88.63%, a mean pixel accuracy (MPA) of 99.31%, a mean IoU (MIoU) of 83.99%, and a Hausdorff distance (HD) of 3.04.
arXiv Detail & Related papers (2022-06-07T14:20:53Z)
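The overlap metrics quoted above (DSC, MIoU) have standard definitions for binary masks; a generic sketch, not the paper's evaluation code:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice similarity coefficient and IoU between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return float(dice), float(iou)
```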
- A Novel Mask R-CNN Model to Segment Heterogeneous Brain Tumors through Image Subtraction [0.0]
We propose taking a method performed by radiologists, image subtraction, and applying it to machine learning models to achieve better segmentation.
Using Mask R-CNN, with its ResNet backbone pre-trained on the RSNA Pneumonia Detection Challenge dataset, we train a model on the BraTS 2020 brain tumor dataset.
We evaluate how well image subtraction works by comparing against models trained without it, using the Dice coefficient (F1 score), recall, and precision on a held-out test set.
arXiv Detail & Related papers (2022-04-04T01:45:11Z)
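The image-subtraction preprocessing above can be illustrated as below; which scans are subtracted is an assumption here (post- minus pre-contrast T1 is a common choice), as the abstract does not specify.

```python
import numpy as np

def subtraction_channel(t1_post: np.ndarray, t1_pre: np.ndarray) -> np.ndarray:
    """Min-max normalize both scans, then subtract so enhancing tumor
    stands out; the result can serve as an extra input channel."""
    def norm(x: np.ndarray) -> np.ndarray:
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return np.clip(norm(t1_post) - norm(t1_pre), 0.0, 1.0)
```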
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for deep brain stimulation (DBS) and can be used for general-purpose MR-guided interventions.
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
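A minimal generic Conv-LSTM cell, the building block named above; this is a textbook sketch, not the paper's ConvLR network.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed by a convolution over [x, h]."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Usage over a sequence of frames (shapes illustrative):
cell = ConvLSTMCell(1, 8)
h = c = torch.zeros(1, 8, 64, 64)
for frame in torch.randn(10, 1, 1, 64, 64):
    h, c = cell(frame, (h, c))
```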
- HNF-Netv2 for Brain Tumor Segmentation using multi-modal MR Imaging [86.52489226518955]
We extend our HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
Our method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
arXiv Detail & Related papers (2022-02-10T06:34:32Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was comparable to the intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
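The 2.5D scheme above feeds each slice together with its neighboring slices as channels; a minimal sketch, with the context width of one slice per side as an assumption.

```python
import numpy as np

def make_25d_inputs(volume: np.ndarray, context: int = 1) -> np.ndarray:
    """Turn a (slices, H, W) volume into per-slice 2.5D inputs where each
    target slice carries `context` neighbors on each side as channels."""
    padded = np.pad(volume, ((context, context), (0, 0), (0, 0)), mode="edge")
    return np.stack(
        [padded[i:i + 2 * context + 1] for i in range(volume.shape[0])]
    )  # shape: (slices, 2*context + 1, H, W)
```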
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analysis of the submitted algorithms was performed using technical and biological metrics.
Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
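The mean surface-to-surface distance reported above can be computed from binary masks as follows; a generic scipy-based sketch, not the challenge's official evaluation code.

```python
import numpy as np
from scipy import ndimage

def mean_surface_distance(pred: np.ndarray, gt: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface-to-surface distance (in mm when `spacing`
    is the voxel size in mm) between two binary masks."""
    def surface(mask: np.ndarray) -> np.ndarray:
        mask = mask.astype(bool)
        return mask & ~ndimage.binary_erosion(mask)

    sp, sg = surface(pred), surface(gt)
    # Distance from every voxel to the nearest surface voxel of each mask.
    dt_gt = ndimage.distance_transform_edt(~sg, sampling=spacing)
    dt_pr = ndimage.distance_transform_edt(~sp, sampling=spacing)
    return float(np.concatenate([dt_gt[sp], dt_pr[sg]]).mean())
```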
This list is automatically generated from the titles and abstracts of the papers on this site.