Multimodal Deep Learning to Differentiate Tumor Recurrence from
Treatment Effect in Human Glioblastoma
- URL: http://arxiv.org/abs/2302.14124v1
- Date: Mon, 27 Feb 2023 20:12:28 GMT
- Title: Multimodal Deep Learning to Differentiate Tumor Recurrence from
Treatment Effect in Human Glioblastoma
- Authors: Tonmoy Hossain, Zoraiz Qureshi, Nivetha Jayakumar, Thomas Eluvathingal
Muttikkal, Sohil Patel, David Schiff, Miaomiao Zhang and Bijoy Kundu
- Abstract summary: Differentiating tumor progression (TP) from treatment-related necrosis (TN) is critical for clinical management decisions in glioblastoma (GBM).
dPET introduces a model-corrected blood input function that accounts for partial-volume averaging to compute parametric maps that reveal kinetic information.
A CNN was trained to classify TP versus TN for $35$ brain tumors from $26$ subjects in the PET-MR image space.
- Score: 2.726462580631231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiating tumor progression (TP) from treatment-related necrosis (TN)
is critical for clinical management decisions in glioblastoma (GBM). Dynamic
FDG PET (dPET), an advance from traditional static FDG PET, may prove
advantageous in clinical staging. dPET introduces a model-corrected blood
input function that accounts for partial-volume averaging to compute
parametric maps that reveal kinetic information. In a preliminary study, a
convolutional neural network (CNN) was trained to classify TP versus TN for
$35$ brain tumors from $26$ subjects in the PET-MR image space. 3D parametric
PET Ki (from dPET), traditional static PET standardized uptake values (SUV),
and brain tumor MR voxels formed the input to the CNN. The average test
accuracy across all leave-one-out cross-validation iterations, adjusting for
class weights, was $0.56$ using only the MR voxels, $0.65$ using only the SUV
voxels, and $0.71$ using only the Ki voxels. Combining SUV and MR voxels
increased the test accuracy to $0.62$, while combining MR and Ki voxels
increased it to $0.74$. Thus, dPET features, alone or combined with MR
features, in deep learning models would enhance prediction accuracy in
differentiating TP from TN in GBM.
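The abstract describes the pipeline but provides no code; the following is a minimal sketch of the MR+Ki experiment, assuming a small two-channel 3D CNN trained with a class-weighted cross-entropy loss. The patch size, layer widths, and weight ratio are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a two-channel 3D CNN for
# TP-vs-TN classification from co-registered MR and parametric Ki voxels.
import torch
import torch.nn as nn

class TwoChannel3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),  # ch 0: MR, ch 1: Ki
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global average pool
        )
        self.classifier = nn.Linear(32, 2)               # TP-vs-TN logits

    def forward(self, x):                                # x: (B, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = TwoChannel3DCNN()
# Class weights compensate for TP/TN imbalance ("adjusting for class
# weights" in the abstract); the 1.0:1.5 ratio is an assumed placeholder.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 1.5]))
logits = model(torch.randn(4, 2, 32, 32, 32))            # dummy voxel batch
loss = loss_fn(logits, torch.tensor([0, 1, 1, 0]))
loss.backward()
```

Leave-one-out cross-validation would wrap this in a loop over the $26$ subjects, holding out one subject's tumors per fold.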
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of chromosome arms 1p/19q is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Towards AI Lesion Tracking in PET/CT Imaging: A Siamese-based CNN Pipeline applied on PSMA PET/CT Scans [2.3432822395081807]
This work introduces a Siamese CNN approach for lesion tracking between PET/CT scans.
Our algorithm extracts suitable lesion patches and forwards them into a Siamese CNN trained to classify the lesion patch pairs as corresponding or non-corresponding lesions.
Experiments have been performed with different input patch types and a Siamese network in 2D and 3D.
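As an illustration of the pair-classification setup (not the authors' code), the sketch below shares one 3D encoder between the two lesion patches and classifies the concatenated embeddings; the patch size and layer widths are assumptions.

```python
# Minimal Siamese sketch: one shared 3D encoder embeds both lesion
# patches; a small head classifies the pair as corresponding or not.
import torch
import torch.nn as nn

class SiameseLesionMatcher(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(        # weights shared by both patches
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.head = nn.Linear(2 * embed_dim, 1)  # corresponding vs not

    def forward(self, patch_a, patch_b):
        za, zb = self.encoder(patch_a), self.encoder(patch_b)
        return self.head(torch.cat([za, zb], dim=1))  # one logit per pair

matcher = SiameseLesionMatcher()
a = torch.randn(8, 1, 16, 16, 16)   # assumed PET patch size
b = torch.randn(8, 1, 16, 16, 16)
pair_logits = matcher(a, b)          # (8, 1); train with BCEWithLogitsLoss
```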
arXiv Detail & Related papers (2024-06-13T17:06:15Z)
- Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Patients Using a Longitudinally-Aware Segmentation Network [7.225391135995692]
A longitudinally-aware segmentation network (LAS-Net) can quantify serial PET/CT images of pediatric Hodgkin lymphoma patients.
LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2.
LAS-Net detected residual lymphoma in PET2 with an F1 score of 0.606.
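The longitudinal cross-attention idea can be sketched with a standard multi-head attention layer in which PET2 tokens query PET1 tokens. This is an illustrative reconstruction, not LAS-Net's actual module; the token counts and dimensions are assumed.

```python
# Illustrative longitudinal cross-attention: baseline-scan (PET1) tokens
# act as keys/values, follow-up (PET2) tokens as queries, so PET1
# context informs the PET2 analysis.
import torch
import torch.nn as nn

dim, heads = 64, 4
cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

pet1_tokens = torch.randn(2, 100, dim)  # assumed: flattened PET1 features
pet2_tokens = torch.randn(2, 100, dim)  # assumed: flattened PET2 features

# query=PET2, key/value=PET1: each PET2 location attends over PET1.
attended, _ = cross_attn(pet2_tokens, pet1_tokens, pet1_tokens)
pet2_tokens = pet2_tokens + attended    # residual fusion of PET1 context
```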
arXiv Detail & Related papers (2024-04-12T17:20:57Z)
- Revolutionizing Disease Diagnosis with simultaneous functional PET/MR and Deeply Integrated Brain Metabolic, Hemodynamic, and Perfusion Networks [40.986069119392944]
We propose MX-ARM, a multimodal MiXture-of-experts Alignment and Reconstruction Model.
It is modality detachable and exchangeable, allocating different multi-layer perceptrons dynamically ("mixture of experts") through learnable weights to learn respective representations from different modalities.
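A bare-bones sketch of that mixture-of-experts mechanism follows: several MLP experts share an input, and a learned gate mixes their outputs through softmax weights. All names and sizes are assumptions; MX-ARM's actual routing is more elaborate.

```python
# Minimal mixture-of-experts sketch: MLP experts share one input; a
# learned gate produces soft weights that mix their outputs.
import torch
import torch.nn as nn

class SimpleMoE(nn.Module):
    def __init__(self, in_dim=32, hidden=64, out_dim=32, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(in_dim, n_experts)             # learnable routing

    def forward(self, x):                                    # x: (B, in_dim)
        weights = torch.softmax(self.gate(x), dim=-1)        # (B, n_experts)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, out_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

moe = SimpleMoE()
features = moe(torch.randn(8, 32))   # one such block per modality stream
```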
arXiv Detail & Related papers (2024-03-29T08:47:49Z)
- Score-Based Generative Models for PET Image Reconstruction [38.72868748574543]
We propose several PET-specific adaptations of score-based generative models.
The proposed framework is developed for both 2D and 3D PET.
In addition, we provide an extension to guided reconstruction using magnetic resonance images.
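Schematically, reconstruction with a learned score model alternates a prior-driven Langevin step with a data-fidelity step toward the measured sinogram y = A x. The sketch below uses toy stand-ins (a Gaussian-prior score and an identity system matrix); it shows the generic mechanism, not the paper's PET-specific adaptations.

```python
# Schematic Langevin-style reconstruction with a learned prior score.
# score_net, A, y, step sizes, and iteration count are all placeholders.
import torch

def langevin_recon(score_net, A, y, steps=200, step=1e-3, lam=1.0):
    x = torch.randn(A.shape[1])                 # random initial image
    for _ in range(steps):
        prior = score_net(x)                    # learned score ~ grad log p(x)
        fidelity = -A.T @ (A @ x - y)           # pull x toward measurements
        noise = (2 * step) ** 0.5 * torch.randn_like(x)
        x = x + step * (prior + lam * fidelity) + noise
    return x

# Toy stand-ins so the sketch executes end to end:
A = torch.eye(64)                               # identity "system matrix"
y = A @ torch.rand(64)
recon = langevin_recon(lambda x: -x, A, y)      # Gaussian-prior score
```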
arXiv Detail & Related papers (2023-08-27T19:43:43Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
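The division of labor can be sketched in a few lines: the CPM runs once, and the IRM takes a few cheap refinement passes over its output. The function names and toy stand-ins here are assumptions, not the paper's code.

```python
# Schematic coarse-to-fine pipeline: one coarse pass bears most of the
# compute; a refinement module then iterates a few times on its output.
def reconstruct(cpm, irm, measurements, refine_steps=3):
    image = cpm(measurements)              # single coarse prediction
    for _ in range(refine_steps):          # few refinement iterations
        image = irm(image, measurements)   # refine conditioned on the data
    return image

# Toy stand-ins so the sketch runs end to end:
coarse = lambda y: y * 0.5
refine = lambda x, y: x + 0.1 * (y - x)
print(reconstruct(coarse, refine, 1.0))
```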
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Improved automated lesion segmentation in whole-body FDG/PET-CT via Test-Time Augmentation [5.206955554317389]
In oncology, metabolically active tumors are extensively quantified using positron emission tomography (PET) and computed tomography (CT).
In this study, we investigate the potential benefits of test-time augmentation for segmenting tumors from PET-CT pairings.
We train U-Net and Swin UNETR on the training database to determine how different test-time augmentations improve segmentation performance.
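A minimal version of test-time augmentation for segmentation: predict on flipped copies of the volume, invert each flip on the prediction, and average. The flip set and toy model below are assumptions, not the paper's exact configuration.

```python
# Minimal test-time augmentation sketch for volumetric segmentation.
import torch

def tta_predict(model, volume, flip_dims=((), (2,), (3,), (4,))):
    preds = []
    for dims in flip_dims:
        aug = torch.flip(volume, dims) if dims else volume   # augment input
        pred = model(aug)
        preds.append(torch.flip(pred, dims) if dims else pred)  # undo flip
    return torch.stack(preds).mean(0)        # averaged soft segmentation

model = torch.nn.Sequential(torch.nn.Conv3d(1, 1, 3, padding=1),
                            torch.nn.Sigmoid())   # toy stand-in network
seg = tta_predict(model, torch.randn(1, 1, 16, 16, 16))
```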
arXiv Detail & Related papers (2022-10-14T12:50:59Z)
- Automatic Tumor Segmentation via False Positive Reduction Network for Whole-Body Multi-Modal PET/CT Images [12.885308856495353]
In PET/CT image assessment, automatic tumor segmentation is an important step.
Existing methods tend to over-segment the tumor regions and include non-tumor regions such as normal high-uptake organs, inflammation, and infection.
We introduce a false positive reduction network to overcome this limitation.
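As a rough illustration of the two-stage idea: split the candidate mask into connected components and discard those flagged as false positives. The paper trains a network for that second stage; the scorer below is only a placeholder threshold.

```python
# Illustrative false-positive reduction over a candidate segmentation.
import numpy as np
from scipy import ndimage

def reduce_false_positives(mask, pet, min_mean_suv=2.5):
    labels, n = ndimage.label(mask)            # candidate lesion components
    keep = np.zeros_like(mask, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if pet[comp].mean() >= min_mean_suv:   # placeholder FP criterion
            keep |= comp                       # retain plausible lesions
    return keep

pet = np.random.rand(32, 32, 32) * 5           # toy PET volume
mask = pet > 3.5                               # toy candidate mask
clean = reduce_false_positives(mask, pet)
```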
arXiv Detail & Related papers (2022-09-16T04:01:14Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Synthetic PET via Domain Translation of 3D MRI [1.0052333944678682]
We use a dataset of 56 $^{18}$F-FDG PET/MRI exams to train a 3D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI.
The predicted PET images are forward projected to produce synthetic PET time-of-flight sinograms that can be used with vendor-provided PET reconstruction algorithms.
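For reference, a 3D residual block of the kind such a network stacks is sketched below; the normalization choice and widths are assumptions, since the summary does not specify the architecture.

```python
# Minimal 3D residual block (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))    # skip connection = "residual"

block = ResBlock3D(16)
out = block(torch.randn(1, 16, 8, 8, 8))     # shape-preserving for UNet levels
```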
arXiv Detail & Related papers (2022-06-11T21:32:40Z)
- Controlling False Positive/Negative Rates for Deep-Learning-Based Prostate Cancer Detection on Multiparametric MR images [58.85481248101611]
We propose a novel PCa detection network that incorporates a lesion-level cost-sensitive loss and an additional slice-level loss based on a lesion-to-slice mapping function.
Our experiments on 290 clinical patients show that the lesion-level FNR was effectively reduced from 0.19 to 0.10, and the lesion-level FPR from 1.03 to 0.66, by changing the lesion-level cost.
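The mechanism behind that trade-off can be sketched as a binary cross-entropy with separate false-negative and false-positive costs; the weights below are illustrative, not the paper's lesion-level formulation (which operates through a lesion-to-slice mapping rather than on raw predictions).

```python
# Cost-sensitive binary cross-entropy: raising fn_cost trades a lower
# false-negative rate for a higher false-positive rate, and vice versa.
import torch

def cost_sensitive_bce(pred, target, fn_cost=2.0, fp_cost=1.0):
    eps = 1e-7
    pred = pred.clamp(eps, 1 - eps)
    fn_term = -fn_cost * target * torch.log(pred)              # missed lesions
    fp_term = -fp_cost * (1 - target) * torch.log(1 - pred)    # false alarms
    return (fn_term + fp_term).mean()

pred = torch.sigmoid(torch.randn(16))        # toy detection probabilities
target = (torch.rand(16) > 0.7).float()      # toy lesion labels
loss = cost_sensitive_bce(pred, target)
```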
arXiv Detail & Related papers (2021-06-04T09:51:27Z)