Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients
- URL: http://arxiv.org/abs/2506.11297v2
- Date: Sun, 29 Jun 2025 05:10:46 GMT
- Title: Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients
- Authors: Jiaqi Wu, Jiahong Ouyang, Farshad Moradi, Mohammad Mehdi Khalighi, Greg Zaharchuk
- Abstract summary: Fluorodeoxyglucose (FDG) PET to evaluate patients with epilepsy is one of the most common applications for simultaneous PET/MRI. Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task.
- Score: 2.8588393332510913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fluorodeoxyglucose (FDG) PET to evaluate patients with epilepsy is one of the most common applications for simultaneous PET/MRI, given the need to image both brain structure and metabolism, but is suboptimal due to the radiation dose in this young population. Little work has been done on synthesizing diagnostic-quality PET images from MRI alone, or from MRI with ultralow-dose PET, using advanced generative AI methods such as diffusion models, with attention to clinical evaluations tailored to the epilepsy population. Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task for epilepsy imaging using simultaneous PET/MRI in 52 subjects (40 train/2 validate/10 hold-out test). We tested three models: two score-based generative diffusion models (SGM-Karras Diffusion [SGM-KD] and SGM-variance preserving [SGM-VP]) and a Transformer-Unet. We report results on standard image processing metrics as well as clinically relevant metrics, including congruency measures (Congruence Index and Congruency Mean Absolute Error) that assess hemispheric metabolic asymmetry, a key part of the clinical analysis of these images. The SGM-KD produced the best qualitative and quantitative results when synthesizing PET purely from T1w and T2 FLAIR images, with the lowest mean absolute error in whole-brain standardized uptake value ratio (SUVR) and the highest intraclass correlation coefficient. When 1% low-dose PET images are included in the inputs, all models improve significantly and are interchangeable in quantitative performance and visual quality. In summary, SGMs hold great potential for pure MRI-to-PET translation, while all three model types can synthesize full-dose FDG-PET accurately using MRI and ultralow-dose PET.
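The abstract's clinically oriented metrics are whole-brain SUVR mean absolute error and congruency measures of hemispheric metabolic asymmetry. The sketch below shows one plausible way such metrics could be computed; the exact definitions of the Congruence Index and Congruency MAE are not given in the abstract, so the asymmetry formula, the ROI label maps, and all function names are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch (not the paper's code): whole-brain SUVR MAE and a left/right
# asymmetry index per paired region. The atlas label maps and the specific
# asymmetry formula are assumptions for illustration only.
import numpy as np

def suvr_mae(pred_suvr: np.ndarray, true_suvr: np.ndarray, brain_mask: np.ndarray) -> float:
    """Mean absolute error of SUVR restricted to a whole-brain mask."""
    return float(np.mean(np.abs(pred_suvr[brain_mask] - true_suvr[brain_mask])))

def asymmetry_index(left_mean: float, right_mean: float) -> float:
    """A common left/right asymmetry index: (L - R) / ((L + R) / 2)."""
    return (left_mean - right_mean) / ((left_mean + right_mean) / 2.0)

def hemispheric_asymmetries(suvr: np.ndarray, left_labels: np.ndarray, right_labels: np.ndarray) -> np.ndarray:
    """Asymmetry index for each ROI present in both hemispheric label maps."""
    rois = np.intersect1d(np.unique(left_labels), np.unique(right_labels))
    rois = rois[rois > 0]  # drop background label 0
    return np.array([
        asymmetry_index(suvr[left_labels == r].mean(), suvr[right_labels == r].mean())
        for r in rois
    ])

# A congruency-style summary could then compare the asymmetry vectors of synthetic
# and ground-truth PET, e.g. via their mean absolute difference or correlation:
# cmae = np.mean(np.abs(hemispheric_asymmetries(pred, L, R) - hemispheric_asymmetries(true, L, R)))
```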
Related papers
- Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction [44.89560992517543]
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z) - GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models [1.1259214300411828]
We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We further explore the impact of using these synthetic PET data in the training of a deep unsupervised anomaly detection (UAD) model. Our results confirm that GAN-based models are best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models.
arXiv Detail & Related papers (2025-05-12T09:00:03Z) - Synthesizing beta-amyloid PET images from T1-weighted Structural MRI: A Preliminary Study [6.4038303148510005]
We propose an approach that utilizes 3D diffusion models to synthesize A$\beta$-PET images from T1-weighted MRI scans.
Our method generates high-quality A$\beta$-PET images for cognitively normal cases, although it is less effective for mild cognitive impairment (MCI) patients.
arXiv Detail & Related papers (2024-09-26T20:51:59Z) - Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose a functional imaging constrained diffusion (FICD) framework for 3D brain PET image synthesis, with paired structural MRI as the input condition, built on a new constrained diffusion model (CDM).
FICD introduces noise to the PET image and then progressively removes it with the CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
arXiv Detail & Related papers (2024-05-03T22:33:46Z) - Three-Dimensional Amyloid-Beta PET Synthesis from Structural MRI with Conditional Generative Adversarial Networks [45.426889188365685]
Alzheimer's Disease hallmarks include amyloid-beta deposits and brain atrophy.
PET is expensive, invasive and exposes patients to ionizing radiation.
MRI is cheaper, non-invasive, and free from ionizing radiation but limited to measuring brain atrophy.
arXiv Detail & Related papers (2024-05-03T14:10:29Z) - Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes the pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z) - Amyloid-Beta Axial Plane PET Synthesis from Structural MRI: An Image Translation Approach for Screening Alzheimer's Disease [49.62561299282114]
An image translation model is implemented to produce quantitatively accurate synthetic amyloid-beta PET images from structural MRI.
We found that the synthetic PET images could be produced with a high degree of similarity to the ground truth in terms of shape and contrast, with high overall SSIM and PSNR.
arXiv Detail & Related papers (2023-09-01T16:26:42Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - Synthesizing PET images from High-field and Ultra-high-field MR images Using Joint Diffusion Attention Model [18.106861006893524]
PET scanning is costly and involves radioactive exposure, resulting in limited availability of PET data.
Ultra-high-field imaging has proven valuable in both clinical and academic settings.
We propose a method for synthesizing PET from high-field MRI and ultra-high-field MRI.
arXiv Detail & Related papers (2023-05-06T02:41:03Z) - Negligible effect of brain MRI data preprocessing for tumor segmentation [36.89606202543839]
We conduct experiments on three publicly available datasets and evaluate the effect of different preprocessing steps in deep neural networks.
Our results demonstrate that most popular standardization steps add no value to the network performance.
We suggest that image intensity normalization approaches do not contribute to model accuracy because of the reduction of signal variance with image standardization.
arXiv Detail & Related papers (2022-04-11T17:29:36Z) - A resource-efficient deep learning framework for low-dose brain PET image reconstruction and analysis [13.713286047709982]
We propose a resource-efficient deep learning framework for L-PET reconstruction and analysis, referred to as transGAN-SDAM.
The transGAN generates higher quality F-PET images, and then the SDAM integrates the spatial information of a sequence of generated F-PET slices to synthesize whole-brain F-PET images.
arXiv Detail & Related papers (2022-02-14T08:40:19Z) - Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)