Language-Enhanced Generative Modeling for PET Synthesis from MRI and Blood Biomarkers
- URL: http://arxiv.org/abs/2511.02206v1
- Date: Tue, 04 Nov 2025 02:53:25 GMT
- Title: Language-Enhanced Generative Modeling for PET Synthesis from MRI and Blood Biomarkers
- Authors: Zhengjie Zhang, Xiaoxie Mao, Qihao Guo, Shaoting Zhang, Qi Huang, Mu Zhou, Fang Xie, Mianxin Liu,
- Abstract summary: Alzheimer's disease diagnosis heavily relies on amyloid-beta positron emission tomography (Abeta-PET). This study explores whether Abeta-PET spatial patterns can be predicted from blood-based biomarkers (BBMs) and MRI scans.
- Score: 19.691395767168633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background: Alzheimer's disease (AD) diagnosis heavily relies on amyloid-beta positron emission tomography (Abeta-PET), which is limited by high cost and limited accessibility. This study explores whether Abeta-PET spatial patterns can be predicted from blood-based biomarkers (BBMs) and MRI scans. Methods: We collected Abeta-PET images, T1-weighted MRI scans, and BBMs from 566 participants. A language-enhanced generative model, driven by a large language model (LLM) and multimodal information fusion, was developed to synthesize PET images. Synthesized images were evaluated for image quality, diagnostic consistency, and clinical applicability within a fully automated diagnostic pipeline. Findings: The synthetic PET images closely resemble real PET scans in both structural details (SSIM = 0.920 +/- 0.003) and regional patterns (Pearson's r = 0.955 +/- 0.007). Diagnostic outcomes using synthetic PET show high agreement with real PET-based diagnoses (accuracy = 0.80). Using synthetic PET, we developed a fully automatic AD diagnostic pipeline integrating PET synthesis and classification. The synthetic PET-based model (AUC = 0.78) outperforms T1-based (AUC = 0.68) and BBM-based (AUC = 0.73) models, while combining synthetic PET and BBMs further improved performance (AUC = 0.79). Ablation analysis supports the advantages of LLM integration and prompt engineering. Interpretation: Our language-enhanced generative model synthesizes realistic PET images, enhancing the utility of MRI and BBMs for Abeta spatial pattern assessment and improving the diagnostic workflow for Alzheimer's disease.
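The abstract reports agreement between synthetic and real PET via SSIM and regional Pearson's r. As a minimal, hedged sketch of the regional-correlation part: `regional_pearson` is a hypothetical helper (not from the paper), and the label and image arrays below are placeholders rather than the study's data.

```python
import numpy as np

def regional_pearson(real, synth, labels):
    """Pearson's r between per-region mean uptake of real and synthetic PET.

    `labels` is an integer atlas/parcellation array of the same shape as the
    images; each unique value marks one brain region (illustrative setup).
    """
    regions = np.unique(labels)
    real_means = np.array([real[labels == r].mean() for r in regions])
    synth_means = np.array([synth[labels == r].mean() for r in regions])
    # Off-diagonal entry of the 2x2 correlation matrix is Pearson's r.
    return np.corrcoef(real_means, synth_means)[0, 1]
```

A synthetic image that is an exact positive linear rescaling of the real one yields r = 1, which is why Pearson's r complements SSIM: it measures regional pattern agreement, not absolute intensity match.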
Related papers
- Unveiling and Bridging the Functional Perception Gap in MLLMs: Atomic Visual Alignment and Hierarchical Evaluation via PET-Bench [48.60251555171943]
Multimodal Large Language Models (MLLMs) have demonstrated remarkable proficiency in tasks such as abnormality detection and report generation for anatomical modalities. In this work, we quantify a fundamental functional perception gap: the inability of current vision encoders to decode functional tracer biodistribution independent of morphological priors. We introduce PET-Bench, the first large-scale functional imaging benchmark comprising 52,308 hierarchical QA pairs from 9,732 multi-site, multi-tracer PET studies. Our results demonstrate that AVA effectively bridges the perception gap, transforming CoT from a source of hallucination into a robust inference tool and improving diagnostic
arXiv Detail & Related papers (2026-01-06T05:58:50Z) - MCR-VQGAN: A Scalable and Cost-Effective Tau PET Synthesis Approach for Alzheimer's Disease Imaging [4.705825869364371]
We propose MCR-VQGAN to synthesize high-fidelity tau PET images from structural T1-weighted MRI scans. Using 222 paired structural T1-weighted MRI and tau PET scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI), we trained and compared MCR-VQGAN with cGAN, WGAN-GP, CycleGAN, and VQGAN. Our results demonstrate that MCR-VQGAN can offer a reliable and scalable surrogate for conventional tau PET imaging.
arXiv Detail & Related papers (2025-12-17T20:22:15Z) - TauGenNet: Plasma-Driven Tau PET Image Synthesis via Text-Guided 3D Diffusion Models [6.674230585698143]
We propose a text-guided 3D diffusion model for 3D tau PET image synthesis, leveraging multimodal conditions from both structural MRI and plasma measurements. Experimental results demonstrate that our approach can generate realistic, clinically meaningful 3D tau PET across a range of disease stages.
arXiv Detail & Related papers (2025-09-04T14:45:50Z) - PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution [60.970656010712275]
We propose a combination of diffusion-based generation (PanoDiff) and Super-Resolution (SR) for generating synthetic dental panoramic radiographs (PRs). The former generates a low-resolution (LR) seed of a PR which is then processed by the SR model to yield a high-resolution (HR) PR of size 1024 × 512. For SR, we propose a state-of-the-art transformer that learns local-global relationships, resulting in sharper edges and textures.
arXiv Detail & Related papers (2025-07-12T09:52:10Z) - Supervised Diffusion-Model-Based PET Image Reconstruction [44.89560992517543]
Diffusion models (DMs) have been introduced as a regularizing prior for PET image reconstruction. We propose a supervised DM-based algorithm for PET reconstruction. Our method enforces the non-negativity of PET's Poisson likelihood model and accommodates the wide intensity range of PET images.
arXiv Detail & Related papers (2025-06-30T16:39:50Z) - Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients [2.8588393332510913]
Fluorodeoxyglucose (FDG) PET for evaluating patients with epilepsy is one of the most common applications of simultaneous PET/MRI. Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task.
arXiv Detail & Related papers (2025-06-12T20:57:02Z) - Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction [40.722159771726375]
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features. With simulated and real [18F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z) - Synthesizing beta-amyloid PET images from T1-weighted Structural MRI: A Preliminary Study [6.4038303148510005]
We propose an approach that utilizes 3D diffusion models to synthesize Abeta-PET images from T1-weighted MRI scans.
Our method generates high-quality Abeta-PET images for cognitively normal cases, although it is less effective for mild cognitive impairment (MCI) patients.
arXiv Detail & Related papers (2024-09-26T20:51:59Z) - PASTA: Pathology-Aware MRI to PET Cross-Modal Translation with Diffusion Models [7.6672160690646445]
We introduce PASTA, a novel pathology-aware image translation framework based on conditional diffusion models.
A cycle exchange consistency and volumetric generation strategy elevate PASTA's capability to produce high-quality 3D PET scans.
For Alzheimer's classification, the performance of synthesized scans improves over MRI by 4%, almost reaching the performance of actual PET.
arXiv Detail & Related papers (2024-05-27T08:33:24Z) - Three-Dimensional Amyloid-Beta PET Synthesis from Structural MRI with Conditional Generative Adversarial Networks [45.426889188365685]
Alzheimer's Disease hallmarks include amyloid-beta deposits and brain atrophy.
PET is expensive, invasive and exposes patients to ionizing radiation.
MRI is cheaper, non-invasive, and free from ionizing radiation but limited to measuring brain atrophy.
arXiv Detail & Related papers (2024-05-03T14:10:29Z) - Amyloid-Beta Axial Plane PET Synthesis from Structural MRI: An Image Translation Approach for Screening Alzheimer's Disease [49.62561299282114]
An image translation model is implemented to produce synthetic amyloid-beta PET images from structural MRI that are quantitatively accurate.
We found that the synthetic PET images could be produced with a high degree of similarity to truth in terms of shape, contrast and overall high SSIM and PSNR.
arXiv Detail & Related papers (2023-09-01T16:26:42Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
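Several of the reconstruction entries above (e.g., the supervised diffusion-model paper) build on PET's non-negative Poisson likelihood. As background only, here is a minimal sketch of MLEM (maximum-likelihood expectation maximization), the classical baseline such methods regularize; the toy system matrix and counts are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Classical MLEM for Poisson PET data.

    A: system (projection) matrix, shape (bins, voxels)
    y: measured counts per detector bin
    The multiplicative update keeps the image non-negative by construction,
    matching the Poisson likelihood's non-negativity constraint.
    """
    x = np.ones(A.shape[1])                    # non-negative initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)        # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

# Toy example: 3 detector bins, 2 voxels, noiseless consistent data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true, n_iter=500)
```

With noiseless, consistent data and a full-column-rank system matrix, the iterates converge to the true activity; with real noisy counts, MLEM is typically stopped early or paired with a prior, which is where the diffusion-model approaches enter.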
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.