GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models
- URL: http://arxiv.org/abs/2505.07364v1
- Date: Mon, 12 May 2025 09:00:03 GMT
- Title: GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models
- Authors: Daria Zotova, Nicolas Pinon, Robin Trombetta, Romain Bouet, Julien Jung, Carole Lartizien
- Abstract summary: We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model. Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer and diffusion models.
- Score: 1.1259214300411828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background and Objective. Research in cross-modal medical image translation has been very productive over the past few years in tackling the scarce availability of large curated multimodality datasets, thanks to the promising performance of GAN-based architectures. However, only a few of these studies have assessed the task-related performance of the synthetic data, especially for the training of deep models. Method. We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we further explore the impact of using these fake PET data to train a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics for the synthetic FDG PET data, tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model that combines deep representation learning based on siamese autoencoders with an OC-SVM density support estimation model. This model is trained on normal subjects only and detects any deviation from the pattern of the normal population. We compare the detection performance of models trained on 35 real T1 MR images of normal subjects, paired either with 35 true PET images or with 35 synthetic PET images generated by the best performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery. Results. The best performing GAN-based models generate realistic fake PET images of control subjects, with SSIM and PSNR values around 0.9 and 23.8, respectively, that are in-distribution (ID) with respect to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity. Conclusion. Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer and diffusion models. We also demonstrate the diagnostic value of these synthetic data for training UAD models and for evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.
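To make the detection pipeline in the abstract concrete, below is a minimal, hypothetical sketch of its two stages: a visual-quality check of synthetic PET (SSIM/PSNR, as reported above) followed by one-class support estimation on latent features of normal subjects. The `Encoder`, all shapes, and the random data are placeholders standing in for the paper's siamese autoencoder and curated datasets; this is a sketch of the general technique, not the authors' implementation.

```python
# Hypothetical sketch: quality check of synthetic PET, then one-class
# anomaly detection on latent features of normal subjects only.
import numpy as np
import torch
import torch.nn as nn
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
from sklearn.svm import OneClassSVM

# --- 1. Visual-quality check of a synthetic PET slice (placeholder data) ---
real_pet = np.random.rand(128, 128)
fake_pet = real_pet + 0.05 * np.random.randn(128, 128)
ssim = structural_similarity(real_pet, fake_pet, data_range=1.0)
psnr = peak_signal_noise_ratio(real_pet, fake_pet, data_range=1.0)
print(f"SSIM={ssim:.3f}  PSNR={psnr:.1f} dB")  # the paper reports ~0.9 / ~23.8

# --- 2. One-class support estimation on normal-subject latents ---
class Encoder(nn.Module):
    """Placeholder encoder standing in for the siamese autoencoder branch."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # T1 + PET channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = Encoder().eval()
normal_patches = torch.rand(100, 2, 64, 64)  # paired T1 + (synthetic) PET patches
with torch.no_grad():
    z_normal = encoder(normal_patches).numpy()

# Fit on normal subjects only: the OC-SVM estimates the support of the
# normal population; anything outside that support is flagged.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(z_normal)

test_patches = torch.rand(10, 2, 64, 64)
with torch.no_grad():
    z_test = encoder(test_patches).numpy()
print(ocsvm.decision_function(z_test))  # negative scores deviate from normal
```

The key design point carried over from the abstract: the OC-SVM sees only normal-subject features at fit time, so any test patch whose latent falls outside the estimated support is flagged as anomalous.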
Related papers
- PET Image Reconstruction Using Deep Diffusion Image Prior [3.1878756384085936]
We propose an anatomical prior-guided PET image reconstruction method based on diffusion models.
The proposed method alternates between diffusion sampling and model fine-tuning guided by the PET sinogram.
Experimental results show that the proposed PET reconstruction method can generalize robustly across tracer distributions and scanner types.
arXiv Detail & Related papers (2025-07-20T18:25:29Z)
- Supervised Diffusion-Model-Based PET Image Reconstruction [44.89560992517543]
Diffusion models (DMs) have been introduced as a regularizing prior for PET image reconstruction.
We propose a supervised DM-based algorithm for PET reconstruction.
Our method enforces the non-negativity of PET's Poisson likelihood model and accommodates the wide intensity range of PET images.
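As a toy illustration of the two ingredients this summary names, the hedged sketch below keeps the PET image non-negative via a softplus reparameterization and scores simulated sinogram counts with a Poisson negative log-likelihood. The system matrix, shapes, and data are made up for illustration; this is not the paper's algorithm.

```python
# Toy sketch (made-up shapes/data): non-negative PET image plus a Poisson
# likelihood on measured sinogram counts, the two ingredients named above.
import torch
import torch.nn.functional as F

n_pix, n_bins = 256, 512
A = torch.rand(n_bins, n_pix) * 0.01             # toy system (projection) matrix
y = torch.poisson(A @ (torch.rand(n_pix) * 50))  # simulated measured counts

x_raw = torch.zeros(n_pix, requires_grad=True)
x = F.softplus(x_raw)                            # PET activity kept non-negative
expected = A @ x + 1e-3                          # expected counts (+ background)

# Poisson NLL: sum(expected - y * log(expected)), up to a constant in y.
nll = F.poisson_nll_loss(expected, y, log_input=False, reduction="sum")
nll.backward()                                   # gradients flow to x_raw
print(float(nll))
```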
arXiv Detail & Related papers (2025-06-30T16:39:50Z)
- Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients [2.8588393332510913]
Fluorodeoxyglucose (FDG) PET for evaluating patients with epilepsy is one of the most common applications of simultaneous PET/MRI.
Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task.
arXiv Detail & Related papers (2025-06-12T20:57:02Z)
- Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction [44.89560992517543]
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans.
The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features.
With simulated and real [18F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z)
- Cascaded 3D Diffusion Models for Whole-body 3D 18-F FDG PET/CT synthesis from Demographics [13.016275337899895]
We propose a cascaded 3D diffusion model framework to synthesize high-fidelity 3D PET/CT volumes directly from demographic variables.
An initial score-based diffusion model synthesizes low-resolution PET/CT volumes from demographic variables alone.
This is followed by a super-resolution residual diffusion model that refines spatial resolution.
arXiv Detail & Related papers (2025-05-28T15:38:33Z)
- Synthetic CT Generation from Time-of-Flight Non-Attenuation-Corrected PET for Whole-Body PET Attenuation Correction [6.062988223565465]
This study presents a deep learning approach to generate synthetic CT (sCT) images directly from Time-of-Flight non-attenuation-corrected (NAC) PET images.
We first evaluated models pre-trained on large-scale natural image datasets for a CT-to-CT reconstruction task.
Visual assessments demonstrated improved reconstruction of both bone and soft tissue structures from TOF NAC PET images.
arXiv Detail & Related papers (2025-04-10T04:49:41Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images, fitting them for the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
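For reference, the Dice score cited in this summary is the standard segmentation overlap metric; a generic computation on toy binary masks (not the AutoPET data) is sketched below.

```python
# Generic Dice coefficient on toy binary masks (illustration only).
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

pred = np.zeros((32, 32), dtype=bool); pred[8:20, 8:20] = True
target = np.zeros((32, 32), dtype=bool); target[10:22, 10:22] = True
print(f"Dice = {dice_score(pred, target):.3f}")
```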
arXiv Detail & Related papers (2024-09-12T14:23:19Z)
- Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose a framework for 3D brain PET image synthesis with paired structural MRI as the input condition, through a new constrained diffusion model (CDM).
The FICD framework introduces noise to PET and then progressively removes it with the CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict the denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
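A generic, hypothetical sketch of such a constrained denoising objective, pairing a toy MRI-conditioned denoiser with a voxel-wise MSE constraint (an assumption standing in for the FICD implementation), might look like:

```python
# Hypothetical sketch of a constrained denoising step (not the FICD code):
# a toy conditional denoiser predicts clean PET from noisy PET + MRI, and a
# voxel-wise MSE term aligns the prediction with its ground truth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, noisy_pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([noisy_pet, mri], dim=1))

model = ToyDenoiser()
pet = torch.rand(2, 1, 16, 16, 16)  # placeholder paired 3D volumes
mri = torch.rand(2, 1, 16, 16, 16)

noisy_pet = pet + 0.3 * torch.randn_like(pet)  # single-level forward noising
pet_hat = model(noisy_pet, mri)                # denoised prediction

# Voxel-wise constraint: align each denoised PET with its ground truth.
loss = F.mse_loss(pet_hat, pet)
loss.backward()
```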
arXiv Detail & Related papers (2024-05-03T22:33:46Z)
- Multibranch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction [42.95604565673447]
This paper presents a novel approach for learned synergistic reconstruction of medical images using multibranch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets.
arXiv Detail & Related papers (2024-04-12T18:21:08Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.