Whole-body tumor segmentation of 18F-FDG PET/CT using a cascaded and
ensembled convolutional neural networks
- URL: http://arxiv.org/abs/2210.08068v1
- Date: Fri, 14 Oct 2022 19:25:56 GMT
- Title: Whole-body tumor segmentation of 18F-FDG PET/CT using a cascaded and
ensembled convolutional neural networks
- Authors: Ludovic Sibille, Xinrui Zhan, and Lei Xiang
- Abstract summary: The goal of this study was to report the performance of a deep neural network designed to automatically segment regions suspected of cancer in whole-body 18F-FDG PET/CT images.
A cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs processed the PET/CT images at a fixed 6 mm resolution.
- Score: 2.735686397209314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background: A crucial initial processing step for quantitative PET/CT
analysis is the segmentation of tumor lesions enabling accurate feature
extraction, tumor characterization, oncologic staging, and image-based therapy
response assessment. Manual lesion segmentation is however associated with
enormous effort and cost and is thus infeasible in clinical routine. Goal: The
goal of this study was to report the performance of a deep neural network
designed to automatically segment regions suspected of cancer in whole-body
18F-FDG PET/CT images in the context of the AutoPET challenge. Method: A
cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs
processed the PET/CT images at a fixed 6 mm resolution. A refiner network
composed of residual layers then enhanced the 6 mm segmentation mask to the original
resolution. Results: 930 cases were used to train the model. 50% were
histologically proven cancer patients and 50% were healthy controls. We
obtained a Dice score of 0.68 on 84 stratified test cases. Manual and automatic
Metabolic Tumor Volume (MTV) were highly correlated (R² = 0.969, slope = 0.947).
Inference time was 89.7 seconds on average. Conclusion: The proposed algorithm
accurately segmented regions suspicious for cancer in whole-body 18F-FDG
PET/CT images.
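The two evaluation quantities reported in the abstract, Dice overlap and Metabolic Tumor Volume, can be sketched for binary voxel masks as follows. This is a minimal illustration, not the challenge's official evaluation code; the 6 mm isotropic voxel spacing is taken from the abstract's processing resolution and masks are flattened to 0/1 lists for simplicity:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat binary voxel masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect agreement

def metabolic_tumor_volume(mask, voxel_mm=(6.0, 6.0, 6.0)):
    """MTV in millilitres: segmented voxel count times voxel volume (mm^3 -> mL)."""
    voxel_ml = voxel_mm[0] * voxel_mm[1] * voxel_mm[2] / 1000.0
    return sum(mask) * voxel_ml

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(dice(pred, truth))                 # -> 0.75
print(metabolic_tumor_volume(truth))     # -> approx. 0.864 mL
```

The reported MTV agreement (R² = 0.969, slope = 0.947) would then come from regressing automatic against manual MTV values over the test cases.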
Related papers
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had lesion-wise median Dice similarity coefficients (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- A cascaded deep network for automated tumor detection and segmentation in clinical PET imaging of diffuse large B-cell lymphoma [0.41579653852022364]
We develop and validate a fast and efficient three-step cascaded deep learning model for automated detection and segmentation of DLBCL tumors from PET images.
Our model is more effective than a single end-to-end network for segmentation of tumors in whole-body PET images.
arXiv Detail & Related papers (2024-03-11T18:36:55Z)
- Liver Tumor Screening and Diagnosis in CT with Pixel-Lesion-Patient Network [37.931408083443074]
Pixel-Lesion-pAtient Network (PLAN) is proposed to jointly segment and classify each lesion with improved anchor queries and a foreground-enhanced sampling loss.
PLAN achieves 95% and 96% in patient-level sensitivity and specificity.
On contrast-enhanced CT, our lesion-level detection precision, recall, and classification accuracy are 92%, 89%, and 86%, outperforming widely used CNNs and transformers for lesion segmentation.
arXiv Detail & Related papers (2023-07-17T06:21:45Z)
- PriorNet: lesion segmentation in PET-CT including prior tumor appearance information [0.0]
We propose a two-step approach to improve the segmentation performances of tumoral lesions in PET-CT images.
The first step generates a prior tumor appearance map from the PET-CT volumes, regarded as prior tumor information.
The second step, consisting of a standard U-Net, receives the prior tumor appearance map and PET-CT images to generate the lesion mask.
arXiv Detail & Related papers (2022-10-05T12:31:42Z)
- Automatic Tumor Segmentation via False Positive Reduction Network for Whole-Body Multi-Modal PET/CT Images [12.885308856495353]
In PET/CT image assessment, automatic tumor segmentation is an important step.
Existing methods tend to over-segment the tumor regions and include regions such as normal high-uptake organs, inflammation, and other infections.
We introduce a false positive reduction network to overcome this limitation.
arXiv Detail & Related papers (2022-09-16T04:01:14Z)
- AutoPET Challenge 2022: Step-by-Step Lesion Segmentation in Whole-body FDG-PET/CT [0.0]
We propose a novel step-by-step 3D segmentation method to address this problem.
We achieved a Dice score of 0.92, a false positive volume of 0.89, and a false negative volume of 0.53 on the preliminary test set.
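The entry above reports the three AutoPET evaluation quantities: Dice, false positive volume, and false negative volume. A simplified voxel-wise sketch of the two volume metrics follows; note that the official AutoPET implementation excludes false positives connected to true lesions, whereas this sketch ignores connectivity, and the 0.216 mL default assumes 6 mm isotropic voxels:

```python
def fp_fn_volumes(pred, truth, voxel_ml=0.216):
    """Simplified voxel-wise false positive / false negative volumes in mL.

    pred and truth are flat binary masks; 0.216 mL is the volume of one
    6 mm isotropic voxel. Returns (fp_volume, fn_volume).
    """
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)  # predicted but absent
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)  # missed tumor voxels
    return fp * voxel_ml, fn * voxel_ml

fp_ml, fn_ml = fp_fn_volumes([1, 1, 1, 0], [1, 0, 1, 1])
print(fp_ml, fn_ml)  # one false positive voxel and one false negative voxel
```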
arXiv Detail & Related papers (2022-09-04T13:49:26Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Convolutional neural network based deep-learning architecture for intraprostatic tumour contouring on PSMA PET images in patients with primary prostate cancer [3.214308133129678]
The aim of this study was to develop a convolutional neural network (CNN) for automated segmentation of intraprostatic tumour (GTV) in PSMA-PET.
The CNN was trained on [68Ga]PSMA-PET and [18F]PSMA-PET images of 152 patients from two different institutions.
arXiv Detail & Related papers (2020-08-07T14:32:14Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M^2UNet) is then developed to assess the severity of COVID-19 patients.
Our M^2UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images [46.844349956057776]
Coronavirus disease 2019 (COVID-19) has been spreading rapidly around the world and has had a significant impact on public health and the economy.
There is still a lack of studies on effectively quantifying the lung infection caused by COVID-19.
We propose a novel deep learning algorithm for automated segmentation of multiple COVID-19 infection regions.
arXiv Detail & Related papers (2020-04-12T16:24:59Z)
- Severity Assessment of Coronavirus Disease 2019 (COVID-19) Using Quantitative Features from Chest CT Images [54.919022945740515]
The aim of this study is to realize automatic severity assessment (non-severe or severe) of COVID-19 based on chest CT images.
A random forest (RF) model is trained to perform this assessment based on quantitative features.
Several quantitative features with the potential to reflect the severity of COVID-19 were identified.
arXiv Detail & Related papers (2020-03-26T15:49:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.