Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction
- URL: http://arxiv.org/abs/2308.10157v1
- Date: Sun, 20 Aug 2023 04:10:36 GMT
- Title: Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction
- Authors: Zeyu Han, Yuhan Wang, Luping Zhou, Peng Wang, Binyu Yan, Jiliu Zhou,
Yan Wang, Dinggang Shen
- Abstract summary: This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM)
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
- Score: 62.29541106695824
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: To obtain high-quality positron emission tomography (PET) scans while
reducing radiation exposure to the human body, various approaches have been
proposed to reconstruct standard-dose PET (SPET) images from low-dose PET
(LPET) images. One widely adopted technique is generative adversarial networks
(GANs), yet recently, diffusion probabilistic models (DPMs) have
emerged as a compelling alternative due to their improved sample quality and
higher log-likelihood scores compared to GANs. Despite this, DPMs suffer from
two major drawbacks in real clinical settings, i.e., the computationally
expensive sampling process and the insufficient preservation of correspondence
between the conditioning LPET image and the reconstructed PET (RPET) image. To
address the above limitations, this paper presents a coarse-to-fine PET
reconstruction framework that consists of a coarse prediction module (CPM) and
an iterative refinement module (IRM). The CPM generates a coarse PET image via
a deterministic process, and the IRM samples the residual iteratively. By
delegating most of the computational overhead to the CPM, the overall sampling
speed of our method can be significantly improved. Furthermore, two additional
strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion
strategy, are proposed and integrated into the reconstruction process, which
can enhance the correspondence between the LPET image and the RPET image,
further improving clinical reliability. Extensive experiments on two human
brain PET datasets demonstrate that our method outperforms the state-of-the-art
PET reconstruction methods. The source code is available at
\url{https://github.com/Show-han/PET-Reconstruction}.
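To make the coarse-to-fine split above concrete, here is a minimal PyTorch-style sketch of the two-stage idea: a deterministic coarse prediction module maps the LPET input to an initial SPET estimate, and a short DDPM-style reverse pass refines only the residual, conditioned on both the LPET image and the coarse output. All names (CoarsePredictionNet, ResidualDenoiser, reconstruct) and network shapes are illustrative assumptions, not the authors' implementation (see the repository above for the official code); the auxiliary guidance and contrastive diffusion strategies are likewise omitted.

```python
# Sketch of the coarse-to-fine decomposition described in the abstract:
# a deterministic coarse prediction module (CPM) followed by an iterative
# refinement module (IRM) that denoises only the residual. Illustrative
# only, not the authors' code.
import torch
import torch.nn as nn

class CoarsePredictionNet(nn.Module):
    """Hypothetical CPM: one-shot deterministic mapping LPET -> coarse SPET."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, lpet):
        return self.net(lpet)

class ResidualDenoiser(nn.Module):
    """Hypothetical IRM denoiser: predicts the noise added to the residual,
    conditioned on the LPET image and the coarse prediction (concatenated)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, ch, 3, padding=1), nn.ReLU(),  # [noisy residual, LPET, coarse]
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x_t, lpet, coarse, t):
        # A real DPM would also embed the timestep t; omitted for brevity.
        return self.net(torch.cat([x_t, lpet, coarse], dim=1))

@torch.no_grad()
def reconstruct(lpet, cpm, denoiser, betas):
    """Coarse prediction plus a short DDPM-style reverse pass on the residual."""
    coarse = cpm(lpet)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x_t = torch.randn_like(coarse)                 # start the residual from Gaussian noise
    for t in reversed(range(len(betas))):
        eps = denoiser(x_t, lpet, coarse, t)
        mean = (x_t - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return coarse + x_t                            # refined PET = coarse + residual

# Usage sketch: a tiny volume and a 10-step schedule keep the reverse pass cheap,
# mirroring the claim that most of the computation sits in the one-shot CPM.
lpet = torch.randn(1, 1, 16, 16, 16)
rpet = reconstruct(lpet, CoarsePredictionNet(), ResidualDenoiser(),
                   betas=torch.linspace(1e-4, 0.02, 10))
print(rpet.shape)  # torch.Size([1, 1, 16, 16, 16])
```

In training, the IRM in such a setup would learn to denoise the residual between the SPET target and the coarse prediction rather than the full image, consistent with the abstract's point that most of the computational overhead is delegated to the CPM.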
Related papers
- HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning [55.88910947643436]
We propose a unified framework for continual learning (CL) with pre-trained models (PTMs) and parameter-efficient tuning (PET).
We present Hierarchical Decomposition PET (HiDe-PET), an innovative approach that explicitly optimizes the objective by incorporating task-specific and task-shared knowledge.
Our approach demonstrates remarkably superior performance over a broad spectrum of recent strong baselines.
arXiv Detail & Related papers (2024-07-07T01:50:25Z) - Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z) - Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes the pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z) - Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - PET Synthesis via Self-supervised Adaptive Residual Estimation
Generative Adversarial Network [14.381830012670969]
Recent methods for generating high-quality PET images from low-dose counterparts have been reported as state-of-the-art in low-to-high image recovery.
To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z) - Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image
Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
arXiv Detail & Related papers (2023-02-27T06:55:00Z) - Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy
CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z) - A resource-efficient deep learning framework for low-dose brain PET
image reconstruction and analysis [13.713286047709982]
We propose a resource-efficient deep learning framework for L-PET reconstruction and analysis, referred to as transGAN-SDAM.
The transGAN generates higher quality F-PET images, and then the SDAM integrates the spatial information of a sequence of generated F-PET slices to synthesize whole-brain F-PET images.
arXiv Detail & Related papers (2022-02-14T08:40:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.