Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free
Survival using a Full-Scale UNet with Attention
- URL: http://arxiv.org/abs/2111.03848v1
- Date: Sat, 6 Nov 2021 10:28:48 GMT
- Title: Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free
Survival using a Full-Scale UNet with Attention
- Authors: Emmanuelle Bourigault, Daniel R. McGowan, Abolfazl Mehranian,
Bartłomiej W. Papież
- Abstract summary: The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing segmentation methods.
We trained multiple neural networks for tumor volume segmentation, and these segmentations were ensembled achieving an average Dice Similarity Coefficient of 0.75 in cross-validation.
For the patient progression-free survival prediction task, we propose a Cox proportional hazards regression combining clinical, radiomic, and deep learning features.
- Score: 0.8138288420049126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation of head and neck (H&N) tumours and prediction of patient
outcome are crucial for disease diagnosis and treatment monitoring.
Current developments of robust deep learning models are hindered by the lack of
large multi-centre, multi-modal data with quality annotations. The MICCAI 2021
HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge
creates a platform for comparing segmentation methods of the primary gross
target volume on fluoro-deoxyglucose (FDG)-PET and Computed Tomography images
and prediction of progression-free survival in H&N oropharyngeal cancer. For
the segmentation task, we proposed a new network based on an encoder-decoder
architecture with full inter- and intra-skip connections to take advantage of
low-level and high-level semantics at full scales. Additionally, we used
Conditional Random Fields as a post-processing step to refine the predicted
segmentation maps. We trained multiple neural networks for tumor volume
segmentation, and these segmentations were ensembled achieving an average Dice
Similarity Coefficient of 0.75 in cross-validation, and 0.76 on the challenge
testing data set. For the patient progression-free survival prediction task, we
propose a Cox proportional hazards regression combining clinical, radiomic, and
deep learning features. Our survival prediction model achieved a concordance
index of 0.82 in cross-validation, and 0.62 on the challenge testing data set.
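The two evaluation metrics reported above can be computed as follows. This is a minimal sketch in plain NumPy, not code from the paper; the function names are illustrative, and the C-index shown is Harrell's pairwise formulation.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary segmentation masks:
    2 * |pred & target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def concordance_index(times, scores, events):
    """Harrell's concordance index: the fraction of comparable patient pairs
    in which the subject with the higher predicted risk score fails first.
    Pairs are comparable when the earlier failure is an observed event;
    tied risk scores count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

For example, two half-overlapping masks give a Dice score of 0.5, and a risk score that is perfectly anti-correlated with survival time gives a C-index of 1.0.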
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - Deep Learning-Based Segmentation of Tumors in PET/CT Volumes: Benchmark of Different Architectures and Training Strategies [0.12301374769426145]
This study examines various neural network architectures and training strategies for automatic segmentation of cancer lesions.
V-Net and nnU-Net models were the most effective for their respective datasets.
Eliminating cancer-free cases from the AutoPET dataset was found to improve the performance of most models.
arXiv Detail & Related papers (2024-04-15T13:03:42Z) - Towards Tumour Graph Learning for Survival Prediction in Head & Neck
Cancer Patients [0.0]
Nearly one million new cases of head & neck cancer were diagnosed worldwide in 2020.
Automated segmentation and prognosis estimation approaches can help ensure each patient gets the most effective treatment.
This paper presents a framework to perform these functions on arbitrary field of view (FoV) PET and CT registered scans.
arXiv Detail & Related papers (2023-04-17T09:32:06Z) - Learning to diagnose cirrhosis from radiological and histological labels
with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
arXiv Detail & Related papers (2023-02-16T17:06:23Z) - Recurrence-free Survival Prediction under the Guidance of Automatic
Gross Tumor Volume Segmentation for Head and Neck Cancers [8.598790229614071]
We developed an automated primary tumor (GTVp) and lymph nodes (GTVn) segmentation method.
We extracted radiomics features from the segmented tumor volume and constructed a multi-modality tumor recurrence-free survival (RFS) prediction model.
arXiv Detail & Related papers (2022-09-22T18:44:57Z) - TMSS: An End-to-End Transformer-based Multimodal Network for
Segmentation and Survival Prediction [0.0]
Oncologists do not analyse each data source in isolation, but rather fuse information from multiple sources such as medical images and patient history.
This work proposes a deep learning method that mimics oncologists' analytical behavior when quantifying cancer and estimating patient survival.
arXiv Detail & Related papers (2022-09-12T06:22:05Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - Multi-task fusion for improving mammography screening data
classification [3.7683182861690843]
We propose a pipeline approach, where we first train a set of individual, task-specific models.
We then investigate the fusion thereof, which is in contrast to the standard model ensembling strategy.
Our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling.
arXiv Detail & Related papers (2021-12-01T13:56:27Z) - Bootstrapping Your Own Positive Sample: Contrastive Learning With
Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics, and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.