Convolutional neural network based deep-learning architecture for
intraprostatic tumour contouring on PSMA PET images in patients with primary
prostate cancer
- URL: http://arxiv.org/abs/2008.03201v1
- Date: Fri, 7 Aug 2020 14:32:14 GMT
- Authors: Dejan Kostyszyn, Tobias Fechter, Nico Bartl, Anca L. Grosu, Christian
Gratzke, August Sigle, Michael Mix, Juri Ruf, Thomas F. Fassbender, Selina
Kiefer, Alisa S. Bettermann, Nils H. Nicolay, Simon Spohn, Maria U. Kramer,
Peter Bronsert, Hongqian Guo, Xuefeng Qiu, Feng Wang, Christoph Henkenberens,
Rudolf A. Werner, Dimos Baltas, Philipp T. Meyer, Thorsten Derlin, Mengxia
Chen, Constantinos Zamboglou
- Abstract summary: The aim of this study was to develop a convolutional neural network (CNN) for automated segmentation of intraprostatic tumour (GTV) in PSMA-PET.
The CNN was trained on [68Ga]PSMA-PET images of 152 patients from two different institutions and tested on independent [68Ga]PSMA-PET and [18F]PSMA-PET datasets.
- Score: 3.214308133129678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate delineation of the intraprostatic gross tumour volume (GTV) is a
prerequisite for treatment approaches in patients with primary prostate cancer
(PCa). Prostate-specific membrane antigen positron emission tomography
(PSMA-PET) may outperform MRI in GTV detection. However, visual GTV delineation
underlies interobserver heterogeneity and is time consuming. The aim of this
study was to develop a convolutional neural network (CNN) for automated
segmentation of intraprostatic tumour (GTV-CNN) in PSMA-PET.
Methods: The CNN (3D U-Net) was trained on [68Ga]PSMA-PET images of 152
patients from two different institutions and the training labels were generated
manually using a validated technique. The CNN was tested on two independent
internal (cohort 1: [68Ga]PSMA-PET, n=18 and cohort 2: [18F]PSMA-PET, n=19) and
one external (cohort 3: [68Ga]PSMA-PET, n=20) test datasets. Agreement between
manual contours and GTV-CNN was assessed with the Dice-Sørensen coefficient
(DSC). Sensitivity and specificity were calculated for the two internal
test datasets using whole-mount histology as reference.
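The overlap and detection metrics named here have standard voxel-wise definitions for binary 3D masks. A minimal NumPy sketch follows as an illustration only; the authors' actual evaluation code lives in their repository, and the helper names below are our own:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice-Sørensen coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # By convention, two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0

def sensitivity_specificity(pred, gt):
    """Voxel-wise sensitivity and specificity of pred against gt."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```

Note that the paper's sensitivity and specificity are computed against whole-mount histology rather than against the manual contours, so the `gt` mask there is the histology-derived reference.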
Results: Median DSCs for cohorts 1-3 were 0.84 (range: 0.32-0.95), 0.81
(range: 0.28-0.93) and 0.83 (range: 0.32-0.93), respectively. Sensitivities and
specificities for GTV-CNN were comparable with manual expert contours: 0.98 and
0.76 (cohort 1) and 1.00 and 0.57 (cohort 2), respectively. Computation time was
around 6 seconds for a standard dataset.
Conclusion: The application of a CNN for automated contouring of
intraprostatic GTV in [68Ga]PSMA- and [18F]PSMA-PET images resulted in a high
concordance with expert contours and in high sensitivities and specificities in
comparison with the histological reference. This robust, accurate and fast technique
may be implemented for treatment concepts in primary PCa. The trained model and
the study's source code are available in an open source repository.
Related papers
- Towards AI Lesion Tracking in PET/CT Imaging: A Siamese-based CNN Pipeline applied on PSMA PET/CT Scans [2.3432822395081807]
This work introduces a Siamese CNN approach for lesion tracking between PET/CT scans.
Our algorithm extracts suitable lesion patches and forwards them into a Siamese CNN trained to classify the lesion patch pairs as corresponding or non-corresponding lesions.
Experiments have been performed with different input patch types and a Siamese network in 2D and 3D.
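The pairing step can be pictured as a shared encoder mapping both patches into a common embedding space, with a similarity decision on the pair. The sketch below is purely illustrative: a fixed shared linear projection stands in for the paper's trained Siamese CNN, and the function names and 0.9 threshold are our own assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the shared CNN encoder: one linear projection whose
# weights are shared by both branches of the Siamese pair.
W = rng.standard_normal((64, 16))

def embed(patch):
    """Map a flattened 4x4x4 lesion patch to a 16-D embedding."""
    return patch.reshape(-1) @ W

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_lesion(patch_a, patch_b, threshold=0.9):
    """Classify a patch pair as corresponding if embeddings are close."""
    return cosine_similarity(embed(patch_a), embed(patch_b)) >= threshold
```

In the paper the encoder is a trained 2D or 3D CNN and the correspondence decision is learned; a fixed cosine-similarity threshold merely illustrates the corresponding/non-corresponding classification.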
arXiv Detail & Related papers (2024-06-13T17:06:15Z)
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- Vision Transformer-based Multimodal Feature Fusion Network for Lymphoma Segmentation on PET/CT Images [6.715992297496958]
We aim to develop an accurate method for lymphoma segmentation with 18F-Fluorodeoxyglucose positron emission tomography (PET) and computed tomography (CT) images.
Our lymphoma segmentation approach combines a vision transformer with dual encoders, adeptly fusing PET and CT data via multimodal cross-attention fusion (MMCAF) module.
arXiv Detail & Related papers (2024-02-04T05:25:12Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- An open-source deep learning algorithm for efficient and fully-automatic analysis of the choroid in optical coherence tomography [3.951995351344523]
We develop an open-source, fully-automatic deep learning algorithm, DeepGPET, for choroid region segmentation in optical coherence tomography (OCT) data.
arXiv Detail & Related papers (2023-07-03T10:01:36Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and interpretability of attention-based saliency.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Multimodal Deep Learning to Differentiate Tumor Recurrence from Treatment Effect in Human Glioblastoma [2.726462580631231]
Differentiating tumor progression (TP) from treatment-related necrosis (TN) is critical for clinical management decisions in glioblastoma (GBM).
dPET includes novel methods of a model-corrected blood input function that accounts for partial volume averaging to compute parametric maps that reveal kinetic information.
A CNN was trained to classify TP and TN for 35 brain tumors from 26 subjects in the PET-MR image space.
arXiv Detail & Related papers (2023-02-27T20:12:28Z)
- Whole-body tumor segmentation of 18F-FDG PET/CT using a cascaded and ensembled convolutional neural networks [2.735686397209314]
The goal of this study was to report the performance of a deep neural network designed to automatically segment regions suspected of cancer in whole-body 18F-FDG PET/CT images.
A cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs processed the PET/CT images at a fixed 6 mm resolution.
arXiv Detail & Related papers (2022-10-14T19:25:56Z)
- 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classifications (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z)
- CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z)
- Machine Learning Automatically Detects COVID-19 using Chest CTs in a Large Multicenter Cohort [43.99203831722203]
Our retrospective study obtained 2096 chest CTs from 16 institutions.
A metric-based approach for classification of COVID-19 used interpretable features.
A deep learning-based classifier differentiated COVID-19 via 3D features extracted from CT attenuation and probability distribution of airspace opacities.
arXiv Detail & Related papers (2020-06-09T00:40:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.