Segmentation and Risk Score Prediction of Head and Neck Cancers in
PET/CT Volumes with 3D U-Net and Cox Proportional Hazard Neural Networks
- URL: http://arxiv.org/abs/2202.07823v1
- Date: Wed, 16 Feb 2022 01:59:33 GMT
- Title: Segmentation and Risk Score Prediction of Head and Neck Cancers in
PET/CT Volumes with 3D U-Net and Cox Proportional Hazard Neural Networks
- Authors: Fereshteh Yousefirizi, Ian Janzen, Natalia Dubljevic, Yueh-En Liu,
Chloe Hill, Calum MacAulay, Arman Rahmim
- Abstract summary: We used a 3D nnU-Net model with residual layers supplemented by squeeze and excitation (SE) normalization for tumor segmentation from PET/CT images.
A hazard risk prediction model (CoxCC) was trained on a number of PET/CT radiomic features extracted from the segmented lesions.
A 10-fold cross-validated CoxCC model resulted in a c-index validation score of 0.89, and a c-index score of 0.61 on the HECKTOR challenge test dataset.
- Score: 0.4433315630787158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We utilized a 3D nnU-Net model with residual layers supplemented by squeeze
and excitation (SE) normalization for tumor segmentation from PET/CT images
provided by the Head and Neck Tumor segmentation challenge (HECKTOR). Our
proposed loss function incorporates the Unified Focal and Mumford-Shah losses
to take advantage of distribution-, region-, and boundary-based loss
functions. The results of leave-one-out-center-cross-validation performed on
different centers showed a segmentation performance of 0.82 average Dice
similarity coefficient (DSC) and 3.16 median Hausdorff distance (HD), and our
results on the test set
achieved 0.77 DSC and 3.01 HD. Following lesion segmentation, we proposed
training a case-control proportional hazard Cox model with an MLP neural net
backbone to predict the hazard risk score for each discrete lesion. This hazard
risk prediction model (CoxCC) was trained on a number of PET/CT radiomic
features extracted from the segmented lesions, patient and lesion demographics,
and encoder features provided from the penultimate layer of a multi-input 2D
PET/CT convolutional neural network tasked with predicting time-to-event for
each lesion. A 10-fold cross-validated CoxCC model resulted in a c-index
validation score of 0.89, and a c-index score of 0.61 on the HECKTOR challenge
test dataset.
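For reference, the c-index values reported above (0.89 in cross-validation, 0.61 on the test set) are concordance indices: the fraction of comparable patient pairs in which the higher predicted risk score corresponds to the shorter survival time. Below is a minimal illustrative sketch of Harrell's c-index on toy data; it is not the challenge's evaluation code, and it skips tied survival times for simplicity.

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's c-index: fraction of comparable pairs in which the
    patient with the shorter survival time has the higher risk score.
    times: observed times; events: 1 if the event occurred, 0 if censored;
    risk_scores: model-predicted hazard risk (higher = worse prognosis).
    Simplified: pairs with tied times are skipped."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:  # order so that i has the shorter time
            i, j = j, i
        # A pair is comparable only if the shorter time is an actual event.
        if times[i] == times[j] or events[i] == 0:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5  # tied scores count half, by convention
    return concordant / comparable

# Perfectly ranked toy cohort: shorter survival <-> higher risk.
print(concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

A c-index of 0.5 corresponds to random ranking, 1.0 to perfect ranking, which is why the drop from 0.89 (validation) to 0.61 (test) indicates substantial overfitting to the training centers.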
Related papers
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
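The Dice scores quoted throughout these entries follow the standard overlap definition DSC = 2|A∩B| / (|A|+|B|) on binary masks. A minimal sketch on toy masks (not the challenge data or evaluation code):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|). Works for 2D or 3D arrays."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, gt))  # 2*2 / (3+3), about 0.667
```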
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- Towards Tumour Graph Learning for Survival Prediction in Head & Neck
Cancer Patients [0.0]
Nearly one million new cases of head & neck cancer were diagnosed worldwide in 2020.
Automated segmentation and prognosis estimation approaches can help ensure each patient gets the most effective treatment.
This paper presents a framework to perform these functions on arbitrary field of view (FoV) PET and CT registered scans.
arXiv Detail & Related papers (2023-04-17T09:32:06Z)
- Slice-by-slice deep learning aided oropharyngeal cancer segmentation
with adaptive thresholding for spatial uncertainty on FDG PET and CT images [0.0]
Tumor segmentation is a fundamental step for radiotherapy treatment planning.
This study proposes a novel automatic deep learning (DL) model to assist radiation oncologists in slice-by-slice GTVp segmentation.
arXiv Detail & Related papers (2022-07-04T15:17:44Z)
- Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest X-ray (CXR) identification performance through lung segmentation.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate
Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classification (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z)
- Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free
Survival using a Full-Scale UNet with Attention [0.8138288420049126]
The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing segmentation methods.
We trained multiple neural networks for tumor volume segmentation, and the resulting segmentations were ensembled, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation.
For the patient progression-free survival prediction task, we propose a Cox proportional hazards regression combining clinical, radiomic, and deep learning features.
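In the Cox proportional hazards framework used here and in the main paper, a patient's hazard is h(t|x) = h0(t)·exp(β·x), so exp(β·x) serves as a relative risk score for ranking patients. A minimal sketch with hypothetical features and coefficients (not any paper's fitted model):

```python
import math

def cox_risk_score(features, coefficients):
    """Cox PH relative risk: h(t|x) = h0(t) * exp(beta . x), so
    exp(beta . x) is the hazard ratio relative to a baseline
    patient with all features at zero."""
    linear_predictor = sum(b * x for b, x in zip(coefficients, features))
    return math.exp(linear_predictor)

# Hypothetical radiomic/clinical features and fitted coefficients.
beta = [0.4, -0.2, 0.1]
patient_a = [1.0, 0.0, 2.0]  # linear predictor 0.6 -> risk exp(0.6)
patient_b = [0.0, 1.0, 0.0]  # linear predictor -0.2 -> risk exp(-0.2)
print(cox_risk_score(patient_a, beta) > cox_risk_score(patient_b, beta))  # True
```

The baseline hazard h0(t) cancels when ranking patients, which is why the c-index can be computed from these risk scores alone.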
arXiv Detail & Related papers (2021-11-06T10:28:48Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images can greatly aid the estimation of intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Controlling False Positive/Negative Rates for Deep-Learning-Based
Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments on 290 clinical patients conclude that 1) the lesion-level FNR was effectively reduced from 0.19 to 0.10 and the lesion-level FPR was reduced from 1.03 to 0.66 by changing the lesion-level cost.
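As an illustration of how such lesion-level rates can be computed, here is a minimal sketch assuming FNR is the fraction of true lesions missed and FPR is false detections per patient (the counts are hypothetical, chosen only to echo the reported 0.10 and 0.66; the paper's exact definitions may differ):

```python
def lesion_detection_rates(n_true_lesions, n_detected_true,
                           n_false_detections, n_patients):
    """Lesion-level false negative rate (missed lesions / true lesions)
    and false positive rate (spurious detections per patient), the two
    quantities traded off by a lesion-level cost-sensitive loss."""
    fnr = (n_true_lesions - n_detected_true) / n_true_lesions
    fpr_per_patient = n_false_detections / n_patients
    return fnr, fpr_per_patient

# Hypothetical counts: 100 true lesions, 90 detected,
# 66 false detections across 100 patients.
print(lesion_detection_rates(100, 90, 66, 100))  # (0.1, 0.66)
```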
arXiv Detail & Related papers (2021-06-04T09:51:27Z)
- Combining CNN and Hybrid Active Contours for Head and Neck Tumor
Segmentation in CT and PET images [16.76087435628378]
We propose an automatic segmentation method for head and neck tumors based on the combination of convolutional neural networks (CNNs) and hybrid active contours.
Our method ranked second place in the MICCAI 2020 HECKTOR challenge with average Dice Similarity Coefficient, precision, and recall of 0.752, 0.838, and 0.717, respectively.
arXiv Detail & Related papers (2020-12-28T12:12:14Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
- Lung Infection Quantification of COVID-19 in CT Images with Deep
Learning [41.35413216175024]
A deep learning system was developed to automatically quantify infection regions of interest.
A human-in-the-loop strategy was adopted to assist radiologists with infection region segmentation.
arXiv Detail & Related papers (2020-03-10T11:58:40Z)