Comprehensive Evaluation of Quantitative Measurements from Automated Deep Segmentations of PSMA PET/CT Images
- URL: http://arxiv.org/abs/2504.16237v1
- Date: Tue, 22 Apr 2025 20:03:45 GMT
- Title: Comprehensive Evaluation of Quantitative Measurements from Automated Deep Segmentations of PSMA PET/CT Images
- Authors: Obed Korshie Dzikunu, Amirhossein Toosi, Shadab Ahamed, Sara Harsini, Francois Benard, Xiaoxiao Li, Arman Rahmim
- Abstract summary: This study performs a comprehensive evaluation of quantitative measurements as extracted from automated deep-learning-based segmentation methods. We analyzed 380 prostate-specific membrane antigen (PSMA) targeted [18F]DCFPyL PET/CT scans of patients with biochemical recurrence of prostate cancer.
- Score: 15.20796234043916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study performs a comprehensive evaluation of quantitative measurements extracted from automated deep-learning-based segmentation methods, beyond traditional Dice Similarity Coefficient assessments, focusing on six quantitative metrics: SUVmax, SUVmean, total lesion activity (TLA), total metabolic tumor volume (TMTV), lesion count, and lesion spread. We analyzed 380 prostate-specific membrane antigen (PSMA) targeted [18F]DCFPyL PET/CT scans of patients with biochemical recurrence of prostate cancer, training three deep neural networks (U-Net, Attention U-Net, and SegResNet) with four loss functions: Dice Loss, Dice Cross Entropy, Dice Focal Loss, and our proposed L1-weighted Dice Focal Loss (L1DFL). Evaluations indicated that Attention U-Net paired with L1DFL achieved the strongest correlation with the ground truth (concordance correlation = 0.90-0.99 for SUVmax and TLA), whereas models employing the Dice Loss and the other two compound losses, particularly with SegResNet, underperformed. Equivalence testing (TOST, alpha = 0.05, Delta = 20%) confirmed high performance for the SUV metrics, lesion count, and TLA, with L1DFL yielding the best performance. By contrast, tumor volume and lesion spread exhibited greater variability. Bland-Altman, Coverage Probability, and Total Deviation Index analyses further showed that our proposed L1DFL minimizes variability in quantification of the ground-truth clinical measures. The code is publicly available at: https://github.com/ObedDzik/pca_segment.git.
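The equivalence testing mentioned in the abstract can be illustrated with a minimal sketch. This is a generic paired TOST on percentage differences against a +/- 20% margin; the exact normalization the authors use is not stated here, so `delta_frac`, the percent-difference formulation, and the normal approximation to the t distribution are all assumptions of this sketch, not the paper's implementation.

```python
import math
import numpy as np

def tost_equivalence(pred, truth, delta_frac=0.20, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired metric values.

    Differences are expressed as a percentage of the ground-truth mean and
    tested against a +/- delta_frac * 100 % margin (Delta = 20% above).
    A normal approximation to the t distribution is used for simplicity.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    diff = 100.0 * (pred - truth) / truth.mean()   # percent differences
    n, delta = diff.size, 100.0 * delta_frac
    se = diff.std(ddof=1) / math.sqrt(n)           # standard error of the mean
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p_lower = 1.0 - norm_cdf((diff.mean() + delta) / se)  # H0: mean <= -delta
    p_upper = norm_cdf((diff.mean() - delta) / se)        # H0: mean >= +delta
    # Equivalence is declared only if both one-sided tests reject.
    return max(p_lower, p_upper) < alpha
```

Equivalence holds only when both one-sided p-values fall below alpha, which is what makes TOST stricter than a simple non-significant difference test.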
Related papers
- Adaptive Voxel-Weighted Loss Using L1 Norms in Deep Neural Networks for Detection and Segmentation of Prostate Cancer Lesions in PET/CT Images [16.92267561082044]
This study proposes a new loss function for deep neural networks, the L1-weighted Dice Focal Loss (L1DFL), for automated detection and segmentation of metastatic prostate cancer lesions in PET/CT scans. We trained two 3D convolutional neural networks, Attention U-Net and SegResNet, and concatenated the PET and CT volumes channel-wise as input. The L1DFL outperformed the comparative loss functions by at least 13% on the test set.
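For orientation, here is a minimal numpy sketch of the generic Dice Focal Loss that these compound losses build on. The L1-based voxel weighting of the proposed L1DFL is not reproduced, since its exact form is not given in this summary; `gamma` is an assumed focusing parameter, and the equal weighting of the two terms is also an assumption.

```python
import numpy as np

def dice_focal_loss(prob, target, gamma=2.0, eps=1e-6):
    """Compound Dice + focal loss on flattened voxel arrays.

    prob   : predicted foreground probabilities in [0, 1]
    target : binary ground-truth labels {0, 1}
    """
    prob = np.clip(prob, eps, 1.0 - eps)
    # Soft Dice loss term: 1 - Dice overlap of the soft prediction.
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Focal term: (1 - p_t)^gamma down-weights easy, confident voxels.
    p_t = np.where(target == 1, prob, 1.0 - prob)
    focal = (-((1.0 - p_t) ** gamma) * np.log(p_t)).mean()
    return dice + focal
```

A better prediction should yield a strictly lower loss, which is the property any such compound loss must preserve.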
arXiv Detail & Related papers (2025-02-04T22:45:16Z)
- A Comprehensive Framework for Automated Segmentation of Perivascular Spaces in Brain MRI with the nnU-Net [37.179674347248266]
Enlargement of perivascular spaces (PVS) is common in neurodegenerative disorders. There is a need for reliable PVS detection methods, which are currently lacking.
arXiv Detail & Related papers (2024-11-29T09:19:57Z)
- Multi-modal Evidential Fusion Network for Trustworthy PET/CT Tumor Segmentation [5.839660501978193]
In clinical settings, the quality of PET and CT images often varies significantly, leading to uncertainty in the modality information extracted by networks.
We propose a novel Multi-modal Evidential Fusion Network (MEFN), which consists of two core stages: Cross-Modal Feature Learning (CFL) and Multi-modal Trustworthy Fusion (MTF).
Our model can provide radiologists with credible uncertainty of the segmentation results for their decision in accepting or rejecting the automatic segmentation results.
arXiv Detail & Related papers (2024-06-26T13:14:24Z)
- Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Patients Using a Longitudinally-Aware Segmentation Network [7.225391135995692]
A longitudinally-aware segmentation network (LAS-Net) can quantify serial PET/CT images for pediatric Hodgkin lymphoma patients.
LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2.
LAS-Net detected residual lymphoma in PET2 with an F1 score of 0.606.
arXiv Detail & Related papers (2024-04-12T17:20:57Z)
- Self-calibrated convolution towards glioma segmentation [45.74830585715129]
We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
arXiv Detail & Related papers (2024-02-07T19:51:13Z)
- 3D Lymphoma Segmentation on PET/CT Images via Multi-Scale Information Fusion with Cross-Attention [6.499725732124126]
This study aims to develop a precise segmentation method for diffuse large B-cell lymphoma (DLBCL) lesions.
We propose a 3D dual-branch encoder segmentation method using shifted window transformers and a Multi-Scale Information Fusion (MSIF) module.
The model was trained and validated on a dataset of 165 DLBCL patients using 5-fold cross-validation.
arXiv Detail & Related papers (2024-02-04T05:25:12Z)
- Comprehensive framework for evaluation of deep neural networks in detection and quantification of lymphoma from PET/CT images: clinical insights, pitfalls, and observer agreement analyses [0.9958347059366389]
This study addresses critical gaps in automated lymphoma segmentation from PET/CT images. Deep learning has been applied for lymphoma lesion segmentation, but few studies incorporate out-of-distribution testing. We show that networks perform better on large, intense lesions with higher metabolic activity.
arXiv Detail & Related papers (2023-11-16T06:58:46Z)
- Generalized Dice Focal Loss trained 3D Residual UNet for Automated Lesion Segmentation in Whole-Body FDG PET/CT Images [0.4630436098920747]
We train a 3D Residual UNet using the Generalized Dice Focal Loss function on the AutoPET challenge 2023 training dataset.
In the preliminary test phase, the average ensemble achieved a Dice similarity coefficient (DSC), false-positive volume (FPV), and false-negative volume (FNV) of 0.5417, 0.8261 ml, and 0.2538 ml, respectively.
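The DSC, FPV, and FNV metrics reported above can be sketched from binary masks as follows. Note this is a simplified voxel-level variant: the AutoPET challenge's official FPV/FNV definitions operate on connected components, and `voxel_ml` (the volume of one voxel in millilitres) is an assumed parameter of this sketch.

```python
import numpy as np

def seg_metrics(pred, gt, voxel_ml=0.001):
    """DSC plus false-positive / false-negative volumes from binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dsc = 2.0 * inter / total if total else 1.0  # both empty -> perfect
    fpv = np.logical_and(pred, ~gt).sum() * voxel_ml  # predicted, not true
    fnv = np.logical_and(~pred, gt).sum() * voxel_ml  # true, but missed
    return dsc, fpv, fnv
```

FPV and FNV complement DSC because they separate over-segmentation from missed disease, which DSC alone conflates.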
arXiv Detail & Related papers (2023-09-24T05:29:45Z)
- Learned Local Attention Maps for Synthesising Vessel Segmentations [43.314353195417326]
We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI.
It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to only extract information from the T2 MRI relevant to synthesising the CoW.
arXiv Detail & Related papers (2023-08-24T15:32:27Z)
- Robust T-Loss for Medical Image Segmentation [56.524774292536264]
This paper presents a new robust loss function, the T-Loss, for medical image segmentation.
The proposed loss is based on the negative log-likelihood of the Student-t distribution and can effectively handle outliers in the data.
Our experiments show that the T-Loss outperforms traditional loss functions in terms of Dice scores on two public medical datasets.
arXiv Detail & Related papers (2023-06-01T14:49:40Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation [17.623576885481747]
Lymphoma detection and segmentation from PET/CT volumes are crucial for surgical indication and radiotherapy.
We propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer.
Our method achieves accurate segmentation results with a Dice score of 0.726, without any user interaction.
arXiv Detail & Related papers (2021-08-11T19:24:40Z)
- Controlling False Positive/Negative Rates for Deep-Learning-Based Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments on 290 clinical patients conclude that the lesion-level FNR was effectively reduced from 0.19 to 0.10, and the lesion-level FPR from 1.03 to 0.66, by changing the lesion-level cost.
arXiv Detail & Related papers (2021-06-04T09:51:27Z)
- Quantification of pulmonary involvement in COVID-19 pneumonia by means of a cascade of two U-nets: training and assessment on multiple datasets using different annotation criteria [83.83783947027392]
This study aims to exploit artificial intelligence (AI) for the identification, segmentation, and quantification of COVID-19 pulmonary lesions.
We developed an automated analysis pipeline, the LungQuant system, based on a cascade of two U-nets.
The accuracy of the LungQuant system in predicting the CT Severity Score (CT-SS) was also evaluated.
arXiv Detail & Related papers (2021-05-06T10:21:28Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.