Two Stage Segmentation of Cervical Tumors using PocketNet
- URL: http://arxiv.org/abs/2409.11456v1
- Date: Tue, 17 Sep 2024 17:48:12 GMT
- Title: Two Stage Segmentation of Cervical Tumors using PocketNet
- Authors: Awj Twam, Megan Jacobsen, Rachel Glenn, Ann Klopp, Aradhana M. Venkatesan, David Fuentes
- Abstract summary: This work applied a novel deep-learning model (PocketNet) to segment the cervix, vagina, uterus, and tumor(s) on T2w MRI.
PocketNet achieved a mean Dice-Sorensen similarity coefficient (DSC) exceeding 70% for tumor segmentation and 80% for organ segmentation.
- Score: 0.32985979395737786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cervical cancer remains the fourth most common malignancy among women worldwide [1]. Concurrent chemoradiotherapy (CRT) serves as the mainstay definitive treatment regimen for locally advanced cervical cancers and includes external beam radiation followed by brachytherapy [2]. Integral to radiotherapy treatment planning is the routine contouring of the target tumor at the level of the cervix, the associated gynecologic anatomy, and the adjacent organs at risk (OARs). However, manual contouring of these structures is both time- and labor-intensive and is associated with known interobserver variability that can impact treatment outcomes. While multiple tools have been developed to automatically segment OARs and the high-risk clinical tumor volume (HR-CTV) using computed tomography (CT) images [3-6], the development of deep learning-based tumor segmentation tools using routine T2-weighted (T2w) magnetic resonance imaging (MRI) addresses an unmet clinical need: improving the routine contouring of both anatomical structures and cervical cancers, thereby increasing the quality and consistency of radiotherapy planning. This work applied a novel deep-learning model (PocketNet) to segment the cervix, vagina, uterus, and tumor(s) on T2w MRI. The performance of the PocketNet architecture was evaluated with 5-fold cross-validation. PocketNet achieved a mean Dice-Sorensen similarity coefficient (DSC) exceeding 70% for tumor segmentation and 80% for organ segmentation. These results suggest that PocketNet is robust to variations in contrast protocols, providing reliable segmentation of the regions of interest (ROIs).
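The headline numbers are Dice-Sorensen similarity coefficients averaged over 5-fold cross-validation. As a point of reference, here is a minimal sketch of how a per-structure DSC could be computed from binary masks and aggregated across folds, using numpy and scikit-learn's KFold; all names and the random stand-in data are illustrative, not from the PocketNet codebase:

```python
import numpy as np
from sklearn.model_selection import KFold

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice-Sorensen similarity coefficient of two binary masks: 2|A & B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Hypothetical 5-fold evaluation loop over a list of case identifiers.
case_ids = [f"case_{i:03d}" for i in range(100)]  # placeholder cohort
fold_scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(case_ids):
    # model = train_segmentation_model(...)  # stand-in for training PocketNet on the train split
    scores = []
    for i in test_idx:
        pred = np.random.rand(64, 64, 64) > 0.5   # stand-in for a model prediction
        truth = np.random.rand(64, 64, 64) > 0.5  # stand-in for an expert contour
        scores.append(dice_coefficient(pred, truth))
    fold_scores.append(np.mean(scores))

print(f"mean DSC over 5 folds: {np.mean(fold_scores):.3f}")
```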
Related papers
- UMambaAdj: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and nnU-Net ResEnc Planner [0.04924932828166548]
Magnetic Resonance Imaging (MRI) plays a crucial role in adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast.
Accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging.
Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multi-stage residual blocks.
arXiv Detail & Related papers (2024-10-16T18:26:27Z)
- Evaluating the Impact of Sequence Combinations on Breast Tumor Segmentation in Multiparametric MRI [0.0]
The effect of sequence combinations in mpMRI remains under-investigated.
The nnU-Net model using DCE sequences achieved a Dice similarity coefficient (DSC) of 0.69 ± 0.18 for functional tumor volume (FTV) segmentation.
arXiv Detail & Related papers (2024-06-12T02:09:05Z)
- Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation [47.119513326344126]
The BraTS-MEN-RT challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs.
Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space.
Target volume annotations adhere to established radiotherapy planning protocols.
arXiv Detail & Related papers (2024-05-28T17:25:43Z)
- The 2024 Brain Tumor Segmentation (BraTS) Challenge: Glioma Segmentation on Post-treatment MRI [5.725734864357991]
The 2024 Brain Tumor (BraTS) challenge on post-treatment glioma MRI will provide a community standard and benchmark for state-of-the-art automated segmentation models.
Challenge competitors will develop automated segmentation models to predict four distinct tumor sub-regions.
Models will be evaluated on separate validation and test datasets.
arXiv Detail & Related papers (2024-05-28T17:07:55Z)
- Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks [33.51490233427579]
Two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task.
The best performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy.
The predicted segmentations can be used to accurately classify patients into those with residual tumor and those with gross total resection (one simple volume-threshold reading of this step is sketched below).
arXiv Detail & Related papers (2023-04-18T10:14:45Z)
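The entry above derives a patient-level classification from the predicted segmentation itself. A minimal sketch of one plausible implementation, thresholding the residual tumor volume computed from the predicted mask; the voxel spacing and cutoff here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def residual_tumor_volume_ml(mask: np.ndarray, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of the predicted residual-tumor mask in milliliters (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def classify_resection(mask: np.ndarray, threshold_ml: float = 0.175) -> str:
    # threshold_ml is an illustrative cutoff, not taken from the paper
    volume = residual_tumor_volume_ml(mask)
    return "residual tumor" if volume > threshold_ml else "gross total resection"

pred = np.zeros((128, 128, 64), dtype=bool)  # stand-in network output
pred[60:64, 60:64, 30:32] = True
print(classify_resection(pred))
```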
- Segmentation of Planning Target Volume in CT Series for Total Marrow Irradiation Using U-Net [0.0]
We present a deep learning-based auto-contouring method that segments the Planning Target Volume (PTV) for TMLI treatment using the U-Net architecture.
Our findings are a preliminary but significant step toward a segmentation model that could save radiation oncologists considerable time.
arXiv Detail & Related papers (2023-04-05T10:40:37Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network that accurately discriminates T2 from T3 stage rectal cancer using rectal MR volumes (a minimal sketch of this style of 3D classifier follows this entry).
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
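As a concrete illustration of the volumetric approach above, a hedged PyTorch sketch of a small 3D convolutional binary classifier; the layer widths and input shape are assumptions for illustration, not the authors' architecture:

```python
import torch
import torch.nn as nn

class Volumetric3DClassifier(nn.Module):
    """Small 3D CNN: conv blocks over the whole MR volume, then a binary head."""
    def __init__(self, in_channels: int = 1, width: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.BatchNorm3d(width), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(width, width * 2, kernel_size=3, padding=1),
            nn.BatchNorm3d(width * 2), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),  # collapse remaining spatial extent
        )
        self.head = nn.Linear(width * 2, 1)  # single logit, e.g. P(T3) vs P(T2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = Volumetric3DClassifier()
volume = torch.randn(2, 1, 32, 96, 96)  # (batch, channel, depth, height, width); illustrative shape
logits = model(volume)                  # shape (2, 1)
```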
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation is more accurate in CT than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while leaving the registration accuracy unchanged (a generic volume-preservation loss of this kind is sketched below).
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
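One standard way to express the volume-preservation constraint mentioned above is to penalize deviations of the Jacobian determinant of the deformation from 1 inside each vertebra. A hedged PyTorch sketch of such a loss using finite-difference gradients; this is a generic formulation, not necessarily the authors' exact losses:

```python
import torch

def volume_preservation_loss(disp: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Penalize |det(J) - 1| of the deformation phi(x) = x + u(x) inside a vertebra mask.

    disp: displacement field u, shape (3, D, H, W), in voxel units.
    mask: boolean vertebra label map, shape (D, H, W).
    """
    # Finite-difference spatial gradients of each displacement component:
    # grads[c][a] = d(u_c)/d(x_a), each of shape (D, H, W).
    grads = [torch.gradient(disp[c], dim=(0, 1, 2)) for c in range(3)]
    # Jacobian of phi: J = I + du/dx, assembled per voxel as (3, 3, D, H, W).
    J = torch.stack([torch.stack(g, dim=0) for g in grads], dim=0)
    J = J + torch.eye(3).view(3, 3, 1, 1, 1)
    # Batched determinant over voxels: move spatial dims in front of the 3x3 matrices.
    det = torch.linalg.det(J.permute(2, 3, 4, 0, 1))  # shape (D, H, W)
    return (det[mask] - 1.0).abs().mean()

disp = 0.01 * torch.randn(3, 32, 32, 32)          # stand-in predicted displacement field
mask = torch.zeros(32, 32, 32, dtype=torch.bool)  # stand-in label map of one vertebra
mask[10:20, 10:20, 10:20] = True
loss = volume_preservation_loss(disp, mask)
```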
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully exploits the 3D context information inherent in medical imaging (the general localize-then-segment pattern is sketched below).
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
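The localization-segmentation pipeline above, like the two-stage approach in the main paper, follows a common pattern: a coarse first stage finds the region of interest, and a fine second stage segments a cropped sub-volume. A schematic numpy sketch of that pattern, where `coarse_net` and `fine_net` are stand-ins for any stage-1/stage-2 models (assumed names, not a published API):

```python
import numpy as np

def bounding_box(mask: np.ndarray, margin: int = 8):
    """Axis-aligned bounding box of a coarse mask, padded by a safety margin.

    Assumes the coarse stage found the target, i.e. the mask is non-empty.
    """
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_stage_segment(volume, coarse_net, fine_net):
    # Stage 1: coarse localization over the full volume.
    coarse_mask = coarse_net(volume)
    roi = bounding_box(coarse_mask)
    # Stage 2: fine segmentation restricted to the cropped sub-volume,
    # pasted back into a full-size mask.
    fine_mask = np.zeros_like(coarse_mask)
    fine_mask[roi] = fine_net(volume[roi])
    return fine_mask

# Toy usage with thresholding standing in for the two networks.
vol = np.random.rand(64, 64, 64)
seg = two_stage_segment(vol, lambda v: v > 0.99, lambda v: v > 0.95)
```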
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate, automatic deep learning-based segmentation method for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of an intra-observer study, and slightly lower in the apical slices (the 2.5D input construction is sketched after this entry).
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
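A 2.5D model, as in the entry above, segments each slice with a few neighboring slices stacked as input channels, gaining some through-plane context at 2D cost. A small numpy sketch of that input construction (the context width is an illustrative choice):

```python
import numpy as np

def make_2p5d_inputs(volume: np.ndarray, context: int = 1) -> np.ndarray:
    """Stack each slice with its +/- `context` neighbors as channels.

    volume: (num_slices, H, W)  ->  (num_slices, 2*context + 1, H, W)
    Edge slices are handled by clamping indices to the first/last slice.
    """
    n = volume.shape[0]
    stacks = []
    for i in range(n):
        idx = np.clip(np.arange(i - context, i + context + 1), 0, n - 1)
        stacks.append(volume[idx])  # (2*context + 1, H, W)
    return np.stack(stacks, axis=0)

vol = np.random.rand(6, 128, 128)     # stand-in exam with 6 slices
x = make_2p5d_inputs(vol, context=1)  # shape (6, 3, 128, 128)
```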
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called the Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.