A resource-efficient deep learning framework for low-dose brain PET
image reconstruction and analysis
- URL: http://arxiv.org/abs/2202.06548v1
- Date: Mon, 14 Feb 2022 08:40:19 GMT
- Title: A resource-efficient deep learning framework for low-dose brain PET
image reconstruction and analysis
- Authors: Yu Fu, Shunjie Dong, Yi Liao, Le Xue, Yuanfan Xu, Feng Li, Qianqian
Yang, Tianbai Yu, Mei Tian and Cheng Zhuo
- Abstract summary: We propose a resource-efficient deep learning framework for L-PET reconstruction and analysis, referred to as transGAN-SDAM.
The transGAN generates higher quality F-PET images, and then the SDAM integrates the spatial information of a sequence of generated F-PET slices to synthesize whole-brain F-PET images.
- Score: 13.713286047709982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 18F-fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) imaging
usually needs a full-dose radioactive tracer to obtain satisfactory diagnostic
results, which raises concerns about the potential health risks of radiation
exposure, especially for pediatric patients. Reconstructing low-dose PET
(L-PET) images into high-quality full-dose PET (F-PET) images is an effective
way to both reduce radiation exposure and maintain diagnostic accuracy.
In this paper, we propose a resource-efficient deep learning framework for
L-PET reconstruction and analysis, referred to as transGAN-SDAM, to generate
F-PET from the corresponding L-PET and quantify the standard uptake value ratios
(SUVRs) of the generated F-PET images across the whole brain. The transGAN-SDAM consists of
two modules: a transformer-encoded Generative Adversarial Network (transGAN)
and a Spatial Deformable Aggregation Module (SDAM). The transGAN generates
higher quality F-PET images, and then the SDAM integrates the spatial
information of a sequence of generated F-PET slices to synthesize whole-brain
F-PET images. Experimental results demonstrate the effectiveness and soundness
of our approach.
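The SUVR quantification mentioned in the abstract can be sketched as a ratio of mean uptake between a target region and a reference region. This is a minimal illustrative sketch, not the paper's implementation; the cerebellum is a common reference region for brain FDG-PET, but the abstract does not specify which reference region the authors use.

```python
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Standard uptake value ratio: mean uptake in the target region
    divided by mean uptake in the reference region."""
    target_mean = pet[target_mask].mean()
    reference_mean = pet[reference_mask].mean()
    return float(target_mean / reference_mean)

# Toy volume: reference uptake of 1.0, target uptake of 1.5
pet = np.ones((4, 4, 4))
target = np.zeros_like(pet, dtype=bool)
target[0] = True          # target region: first slice
pet[target] = 1.5
reference = np.zeros_like(pet, dtype=bool)
reference[1] = True       # reference region: second slice
print(suvr(pet, target, reference))  # → 1.5
```

In practice the masks would come from an atlas-based parcellation registered to the PET volume rather than hand-built slices.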
Related papers
- Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z)
- Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose a framework for 3D brain PET image synthesis with paired structural MRI as the input condition, through a new constrained diffusion model (CDM).
The FICD introduces noise to PET and then progressively removes it with CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
arXiv Detail & Related papers (2024-05-03T22:33:46Z)
- Three-Dimensional Amyloid-Beta PET Synthesis from Structural MRI with Conditional Generative Adversarial Networks [45.426889188365685]
Alzheimer's Disease hallmarks include amyloid-beta deposits and brain atrophy.
PET is expensive, invasive and exposes patients to ionizing radiation.
MRI is cheaper, non-invasive, and free from ionizing radiation but limited to measuring brain atrophy.
arXiv Detail & Related papers (2024-05-03T14:10:29Z)
- Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes a pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z)
- Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z)
- PET Tracer Conversion among Brain PET via Variable Augmented Invertible Network [8.895830601854534]
A tracer conversion invertible neural network (TC-INN) for image projection is developed to map FDG images to DOPA images through deep learning.
Experimental results exhibited excellent generation capability in mapping between FDG and DOPA, suggesting that PET tracer conversion has great potential in the case of limited tracer applications.
arXiv Detail & Related papers (2023-11-01T12:04:33Z)
- Amyloid-Beta Axial Plane PET Synthesis from Structural MRI: An Image Translation Approach for Screening Alzheimer's Disease [49.62561299282114]
An image translation model is implemented to produce synthetic amyloid-beta PET images from structural MRI that are quantitatively accurate.
We found that the synthetic PET images could be produced with a high degree of similarity to truth in terms of shape, contrast and overall high SSIM and PSNR.
arXiv Detail & Related papers (2023-09-01T16:26:42Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
Its global frequency parser (GFP) serves as a learnable frequency filter that adjusts components in the frequency domain, forcing the network to restore high-frequency details.
arXiv Detail & Related papers (2023-08-10T06:20:00Z)
- CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images [10.994223928445589]
High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed to synthesize high-quality PET from its low-dose counterparts.
arXiv Detail & Related papers (2023-04-03T05:39:02Z)
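Several of the GAN-based methods listed above (transGAN, PCC-GAN, CG-3DSRGAN) share a common generator objective: an adversarial term that pushes the discriminator's score on generated F-PET toward "real", plus a pixel-wise fidelity term against the full-dose target. The sketch below illustrates that loss shape only; the L1 weighting and the non-saturating adversarial form are illustrative assumptions, not taken from any of these papers.

```python
import numpy as np

def generator_loss(d_score_fake: np.ndarray,
                   fake_fpet: np.ndarray,
                   real_fpet: np.ndarray,
                   lambda_l1: float = 100.0) -> float:
    """Illustrative GAN generator objective for L-PET -> F-PET synthesis."""
    eps = 1e-12
    # Non-saturating adversarial term: -log D(G(L-PET))
    adv = -np.log(d_score_fake + eps).mean()
    # Pixel-wise L1 fidelity: keep the generated F-PET close to ground truth
    l1 = np.abs(fake_fpet - real_fpet).mean()
    return float(adv + lambda_l1 * l1)

# Toy check: a perfect reconstruction with a fully confident discriminator
fake = np.ones((8, 8))
real = np.ones((8, 8))
loss = generator_loss(np.array([1.0]), fake, real)  # adversarial term ~0, L1 = 0
```

The large L1 weight reflects a common design choice in image-to-image GANs: the adversarial term sharpens texture while the fidelity term anchors the output to the measured anatomy.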
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.