Prediction of 5-year Progression-Free Survival in Advanced
Nasopharyngeal Carcinoma with Pretreatment PET/CT using Multi-Modality Deep
Learning-based Radiomics
- URL: http://arxiv.org/abs/2103.05220v1
- Date: Tue, 9 Mar 2021 04:43:33 GMT
- Title: Prediction of 5-year Progression-Free Survival in Advanced
Nasopharyngeal Carcinoma with Pretreatment PET/CT using Multi-Modality Deep
Learning-based Radiomics
- Authors: Bingxin Gu, Mingyuan Meng, Lei Bi, Jinman Kim, David Dagan Feng, and
Shaoli Song
- Abstract summary: We developed an end-to-end multi-modality DLR model to predict 5-year Progression-Free Survival in advanced NPC.
A 3D Convolutional Neural Network (CNN) was optimized to extract deep features from pretreatment multi-modality PET/CT images.
Our study identified a potential radiomics-based prognostic model for survival prediction in advanced NPC and suggests that DLR could serve as a tool to aid cancer management.
- Score: 15.386240118882569
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep Learning-based Radiomics (DLR) has achieved great success on medical
image analysis. In this study, we aim to explore the capability of DLR for
survival prediction in NPC. We developed an end-to-end multi-modality DLR model
using pretreatment PET/CT images to predict 5-year Progression-Free Survival
(PFS) in advanced NPC. A total of 170 patients with pathologically confirmed
advanced NPC (TNM stage III or IVa) were enrolled in this study. A 3D
Convolutional Neural Network (CNN), with two branches to process PET and CT
separately, was optimized to extract deep features from pretreatment
multi-modality PET/CT images and use the derived features to predict the
probability of 5-year PFS. Optionally, TNM stage, as a high-level clinical
feature, can be integrated into our DLR model to further improve prognostic
performance. For a comparison between conventional radiomics (CR) and DLR, 1456
handcrafted features were extracted, and the three top-performing CR methods
were selected as benchmarks from 54 combinations of 6 feature selection methods
and 9 classification methods.
Compared to the three CR methods, our multi-modality DLR models using both PET
and CT, with or without TNM stage (named PCT or PC model), resulted in the
highest prognostic performance. Furthermore, the multi-modality PCT model
outperformed single-modality DLR models using only PET and TNM stage (PT model)
or only CT and TNM stage (CT model). Our study identified a potential
radiomics-based prognostic model for survival prediction in advanced NPC and
suggests that DLR could serve as a tool to aid cancer management.
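The abstract describes a two-branch architecture with optional late fusion of a clinical feature. The following is a minimal, hypothetical NumPy sketch of that fusion logic only: each `extract_deep_features` call is a stand-in for a learned 3D CNN branch (the paper's branches are trained convolutions, not the random projection used here), and the TNM stage is appended as an optional scalar feature before a logistic output unit. All function names, dimensions, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_deep_features(volume, n_features=64):
    """Stand-in for one 3D CNN branch: maps a batch of image volumes to
    fixed-length deep-feature vectors. (Hypothetical placeholder -- the
    paper's actual branches are learned 3D convolutional layers.)"""
    flat = volume.reshape(volume.shape[0], -1)
    # hypothetical fixed projection standing in for learned conv weights
    proj = rng.standard_normal((flat.shape[1], n_features))
    return np.tanh(flat @ proj)

def predict_5y_pfs(pet, ct, tnm_stage=None, weights=None):
    """Late fusion: concatenate PET-branch and CT-branch deep features
    (plus the optional TNM stage) and apply a logistic output unit."""
    feats = np.concatenate(
        [extract_deep_features(pet), extract_deep_features(ct)], axis=1)
    if tnm_stage is not None:  # optional high-level clinical feature
        feats = np.concatenate([feats, tnm_stage[:, None]], axis=1)
    if weights is None:  # hypothetical (untrained) output weights
        weights = rng.standard_normal(feats.shape[1]) * 0.01
    return 1.0 / (1.0 + np.exp(-(feats @ weights)))  # P(5-year PFS)

# toy batch of 2 patients with 16x16x16 "PET" and "CT" volumes
pet = rng.standard_normal((2, 16, 16, 16))
ct = rng.standard_normal((2, 16, 16, 16))
probs = predict_5y_pfs(pet, ct, tnm_stage=np.array([3.0, 4.0]))
```

Passing `tnm_stage=None` corresponds to the PC model and supplying it corresponds to the PCT model in the abstract's naming; either way the output is one survival probability per patient.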
Related papers
- Breast Cancer Neoadjuvant Chemotherapy Treatment Response Prediction Using Aligned Longitudinal MRI and Clinical Data [6.850780131537867]
The goal is to develop machine learning models to predict pathologic complete response (PCR, binary classification) and 5-year relapse-free survival status (RFS, binary classification). The proposed framework includes tumour segmentation, image registration, feature extraction, and predictive modelling. The proposed image registration-based feature extraction consistently improves the predictive models.
arXiv Detail & Related papers (2025-12-19T16:32:31Z) - Supervised Diffusion-Model-Based PET Image Reconstruction [44.89560992517543]
Diffusion models (DMs) have been introduced as a regularizing prior for PET image reconstruction. We propose a supervised DM-based algorithm for PET reconstruction. Our method enforces the non-negativity of PET's Poisson likelihood model and accommodates the wide intensity range of PET images.
arXiv Detail & Related papers (2025-06-30T16:39:50Z) - Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction [44.89560992517543]
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features. With simulated and real [18F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z) - Efficient Parameter Adaptation for Multi-Modal Medical Image Segmentation and Prognosis [4.5445892770974154]
We propose a parameter-efficient multi-modal adaptation (PEMMA) framework for lightweight upgrading of a transformer-based segmentation model.
Our method achieves comparable performance to early fusion, but with only 8% of the trainable parameters, and demonstrates a significant +28% Dice score improvement on PET scans when trained with a single modality.
arXiv Detail & Related papers (2025-04-18T11:52:21Z) - Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Multi-modal Evidential Fusion Network for Trustworthy PET/CT Tumor Segmentation [5.839660501978193]
In clinical settings, the quality of PET and CT images often varies significantly, leading to uncertainty in the modality information extracted by networks.
We propose a novel Multi-modal Evidential Fusion Network (MEFN), which consists of two core stages: Cross-Modal Feature Learning (CFL) and Multi-modal Trustworthy Fusion (MTF).
Our model can provide radiologists with credible uncertainty of the segmentation results for their decision in accepting or rejecting the automatic segmentation results.
arXiv Detail & Related papers (2024-06-26T13:14:24Z) - 2.5D Multi-view Averaging Diffusion Model for 3D Medical Image Translation: Application to Low-count PET Reconstruction with CT-less Attenuation Correction [17.897681480967087]
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation hazards to patients and healthcare providers.
It is desirable to develop 3D methods to translate the non-attenuation-corrected low-dose PET into attenuation-corrected standard-dose PET.
Recent diffusion models have emerged as a new state-of-the-art deep learning method for image-to-image translation, better than traditional CNN-based methods.
We developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation with application on NAC
arXiv Detail & Related papers (2024-06-12T16:22:41Z) - Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose a framework for 3D brain PET image synthesis with paired structural MRI as the input condition, through a new constrained diffusion model (CDM).
The FICD introduces noise to PET and then progressively removes it with CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
arXiv Detail & Related papers (2024-05-03T22:33:46Z) - PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image Segmentation [5.056996354878645]
When both CT and PET scans are available, it is common to combine them as two channels of the input to the segmentation model.
This method requires both scan types during training and inference, posing a challenge due to the limited availability of PET scans.
We propose a parameter-efficient multi-modal adaptation framework for lightweight upgrading of a transformer-based segmentation model.
arXiv Detail & Related papers (2024-04-21T16:29:49Z) - Revolutionizing Disease Diagnosis with simultaneous functional PET/MR and Deeply Integrated Brain Metabolic, Hemodynamic, and Perfusion Networks [40.986069119392944]
We propose MX-ARM, a multimodal MiXture-of-experts Alignment and Reconstruction Model.
It is modality detachable and exchangeable, allocating different multi-layer perceptrons dynamically ("mixture of experts") through learnable weights to learn respective representations from different modalities.
arXiv Detail & Related papers (2024-03-29T08:47:49Z) - Head and Neck Tumor Segmentation from [18F]F-FDG PET/CT Images Based on 3D Diffusion Model [2.4512350526408704]
Head and neck (H&N) cancers are among the most prevalent types of cancer worldwide.
Recently, the diffusion model has demonstrated remarkable performance in various image-generation tasks.
arXiv Detail & Related papers (2024-01-31T04:34:31Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification [42.75911994044675]
We present a novel approach for unpaired image-to-image translation of prostate MRIs and an uncertainty-aware training approach for classifying clinically significant PCa.
Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data.
Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work.
arXiv Detail & Related papers (2023-07-02T05:26:54Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion
Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients
with Advanced Nasopharyngeal Carcinoma using Pretreatment PET/CT [15.386240118882569]
Nasopharyngeal Carcinoma (NPC) is a worldwide malignant epithelial cancer.
Deep learning has been introduced to the survival prediction in various cancers including NPC.
In this study, we introduced the concept of multi-task learning into deep survival models to address the overfitting problem resulting from small data.
arXiv Detail & Related papers (2021-09-16T04:12:59Z) - Deep Implicit Statistical Shape Models for 3D Medical Image Delineation [47.78425002879612]
3D delineation of anatomical structures is a cardinal goal in medical imaging analysis.
Prior to deep learning, statistical shape models that imposed anatomical constraints and produced high quality surfaces were a core technology.
We present deep implicit statistical shape models (DISSMs), a new approach to delineation that marries the representation power of CNNs with the robustness of SSMs.
arXiv Detail & Related papers (2021-04-07T01:15:06Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.