Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction
- URL: http://arxiv.org/abs/2506.03804v1
- Date: Wed, 04 Jun 2025 10:24:14 GMT
- Title: Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction
- Authors: George Webber, Alexander Hammers, Andrew P. King, Andrew J. Reader
- Abstract summary: We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
- Score: 44.89560992517543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown improved lesion detectability and flexibility to reconstruction hyperparameters (e.g. scanner geometry or dose level) when PET images are reconstructed by leveraging pre-trained diffusion models. Such methods train a diffusion model (without sinogram data) on high-quality, but still noisy, PET images. In this work, we propose a simple method for generating subject-specific PET images from a dataset of multi-subject PET-MR scans, synthesizing "pseudo-PET" images by transforming between different patients' anatomy using image registration. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features compared to the original set of PET images. With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data. In particular, the method shows promise in combining information from a guidance MR scan without overly imposing anatomical features, demonstrating an improved trade-off between reconstructing PET-unique image features versus features present in both PET and MR. We believe this approach for generating and utilizing synthetic data has further applications to medical imaging tasks, particularly because patient-specific PET images can be generated without resorting to generative deep learning or large training datasets.
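The data-generation step described in the abstract amounts to: register each other subject's MR to the target subject's MR, then warp that subject's PET through the same transform so it lands in the target's anatomy, yielding "pseudo-PET" training images. A minimal sketch of this idea follows, using SimpleITK; the affine transform, metric, and optimizer settings are illustrative assumptions standing in for the paper's actual registration pipeline.

```python
# Minimal sketch of "pseudo-PET" generation by inter-subject registration.
# Assumptions: each subject has co-registered MR and PET volumes on disk, and
# a single affine MR-to-MR registration stands in for the paper's pipeline.
import SimpleITK as sitk

def register_mr_to_mr(fixed_mr, moving_mr):
    """Estimate an affine transform mapping moving_mr onto fixed_mr."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=2.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        fixed_mr, moving_mr, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed_mr, moving_mr)

def make_pseudo_pet(target_mr_path, other_subjects):
    """Warp other subjects' PET volumes into the target subject's anatomy."""
    fixed_mr = sitk.ReadImage(target_mr_path, sitk.sitkFloat32)
    pseudo_pets = []
    for mr_path, pet_path in other_subjects:      # (MR, PET) file pairs
        moving_mr = sitk.ReadImage(mr_path, sitk.sitkFloat32)
        moving_pet = sitk.ReadImage(pet_path, sitk.sitkFloat32)
        tx = register_mr_to_mr(fixed_mr, moving_mr)
        # Resample the other subject's PET onto the target MR grid.
        pseudo_pets.append(sitk.Resample(
            moving_pet, fixed_mr, tx, sitk.sitkLinear, 0.0, sitk.sitkFloat32))
    return pseudo_pets  # subject-specific training set for the diffusion prior
```

Per the abstract, these subject-specific pseudo-PET volumes are then used to pre-train a personalized diffusion model before reconstruction from the subject's own low-count data.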
Related papers
- PET Image Reconstruction Using Deep Diffusion Image Prior [3.1878756384085936]
We propose an anatomical prior-guided PET image reconstruction method based on diffusion models. The proposed method alternates between diffusion sampling and model fine-tuning guided by the PET sinogram. Experimental results show that the proposed PET reconstruction method generalizes robustly across tracer distributions and scanner types.
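The "alternates between diffusion sampling and model fine-tuning guided by the PET sinogram" loop can be pictured roughly as below; the toy projector, tiny CNN denoiser, and crude sampling update are stand-ins for illustration only, not the authors' components.

```python
# Rough sketch of alternating (i) reverse-diffusion sampling with (ii)
# sinogram-guided fine-tuning of the network. The system matrix A, the tiny
# CNN and the sampling update are toy stand-ins, not the paper's method.
import torch
import torch.nn as nn

torch.manual_seed(0)
H = W = 32                                   # toy image size
n_bins = 512                                 # toy number of sinogram bins
A = torch.rand(n_bins, H * W) * 0.05         # non-negative toy system matrix

def forward_project(x):                      # (1, 1, H, W) -> (n_bins,)
    return A @ x.reshape(-1).clamp(min=0)

def poisson_nll(expected, counts, eps=1e-8):
    # Negative Poisson log-likelihood up to a constant: sum(q - y * log q)
    return (expected - counts * torch.log(expected + eps)).sum()

model = nn.Sequential(                       # tiny stand-in denoiser
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x_true = torch.rand(1, 1, H, W)              # stand-in activity image
sinogram = torch.poisson(forward_project(x_true))   # noisy measured counts

T, n_inner = 50, 2
x = torch.randn(1, 1, H, W)                  # start the sampler from noise
for t in reversed(range(T)):
    # (i) one crude denoising step towards the network's clean estimate
    with torch.no_grad():
        x = x + (model(x) - x) / (t + 1)
    # (ii) briefly fine-tune the network against the measured sinogram
    for _ in range(n_inner):
        opt.zero_grad()
        loss = poisson_nll(forward_project(model(x)), sinogram)
        loss.backward()
        opt.step()
```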
arXiv Detail & Related papers (2025-07-20T18:25:29Z)
- Supervised Diffusion-Model-Based PET Image Reconstruction [44.89560992517543]
Diffusion models (DMs) have been introduced as a regularizing prior for PET image reconstruction. We propose a supervised DM-based algorithm for PET reconstruction. Our method enforces the non-negativity of PET's Poisson likelihood model and accommodates the wide intensity range of PET images.
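As background for the Poisson-likelihood and non-negativity point, the classical MLEM update keeps the PET image non-negative by construction through purely multiplicative updates; the toy example below is standard MLEM with an assumed random system matrix, included only to illustrate the constraint the summary refers to, not the paper's supervised DM algorithm.

```python
# Classical MLEM for the Poisson model y ~ Poisson(A x): the multiplicative
# update keeps x >= 0 at every iteration. Toy system matrix, randoms ignored.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 256, 1024
A = rng.random((n_bins, n_pix)) * 0.05       # non-negative toy system matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true)                  # measured counts

x = np.ones(n_pix)                           # strictly positive start
sens = A.T @ np.ones(n_bins)                 # sensitivity image A^T 1
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)     # y / (A x)
    x *= (A.T @ ratio) / sens                # x stays non-negative throughout
```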
arXiv Detail & Related papers (2025-06-30T16:39:50Z)
- Posterior-Mean Denoising Diffusion Model for Realistic PET Image Reconstruction [0.7366405857677227]
Posterior-Mean Denoising Diffusion Model (PMDM-PET) is a novel approach that builds upon recently established mathematical theory. PMDM-PET first obtains posterior-mean PET predictions under minimum mean square error (MSE), then optimally transports their distribution to the distribution of ground-truth PET images. Experimental results demonstrate that PMDM-PET not only generates realistic PET images with the minimum possible distortion and optimal perceptual quality but also outperforms five recent state-of-the-art (SOTA) deep learning baselines in both qualitative visual inspection and quantitative pixel-wise metrics.
arXiv Detail & Related papers (2025-03-11T15:33:50Z)
- Multi-Subject Image Synthesis as a Generative Prior for Single-Subject PET Image Reconstruction [40.34650079545031]
We propose a novel method for synthesising diverse and realistic pseudo-PET images with improved signal-to-noise ratio. We show how our pseudo-PET images may be exploited as a generative prior for single-subject PET image reconstruction.
arXiv Detail & Related papers (2024-12-05T16:40:33Z)
- Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by a joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z)
- Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality standard-dose PET (SPET) images from low-dose PET (LPET) images.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z)
- Score-Based Generative Models for PET Image Reconstruction [38.72868748574543]
We propose several PET-specific adaptations of score-based generative models.
The proposed framework is developed for both 2D and 3D PET.
In addition, we provide an extension to guided reconstruction using magnetic resonance images.
arXiv Detail & Related papers (2023-08-27T19:43:43Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
A global frequency parser (GFP) serves as a learnable frequency filter that adjusts components in the frequency domain, pushing the network to restore high-frequency details.
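A learnable frequency filter of this kind can be sketched as: take the 2D FFT of each feature map, re-weight every frequency bin with a learned complex coefficient, and transform back. The module below shows the general pattern in PyTorch; the shapes, per-channel weighting, and identity initialisation are assumptions rather than the TriDo-Former implementation of GFP.

```python
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Sketch of a GFP-style learnable frequency filter (assumed design)."""

    def __init__(self, channels, height, width):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency bins along the last axis;
        # initialise to the identity filter (real part 1, imaginary part 0).
        w = torch.zeros(channels, height, width // 2 + 1, 2)
        w[..., 0] = 1.0
        self.weight = nn.Parameter(w)

    def forward(self, x):                        # x: (B, C, H, W), real
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum per map
        spec = spec * torch.view_as_complex(self.weight.contiguous())
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

# usage: y = LearnableFrequencyFilter(64, 128, 128)(torch.randn(2, 64, 128, 128))
```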
arXiv Detail & Related papers (2023-08-10T06:20:00Z)
- Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern [49.240017254888336]
Photoacoustic tomography (PAT) is a novel imaging technique that can resolve both morphological and functional tissue properties.
A current drawback is the limited field-of-view provided by the conventionally applied 2D probes.
We present a novel approach to 3D reconstruction of PAT data that does not require an external tracking system.
arXiv Detail & Related papers (2020-11-10T09:27:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.