CG-3DSRGAN: A classification guided 3D generative adversarial network
for image quality recovery from low-dose PET images
- URL: http://arxiv.org/abs/2304.00725v1
- Date: Mon, 3 Apr 2023 05:39:02 GMT
- Title: CG-3DSRGAN: A classification guided 3D generative adversarial network
for image quality recovery from low-dose PET images
- Authors: Yuxin Xue, Yige Peng, Lei Bi, Dagan Feng, and Jinman Kim
- Abstract summary: High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed to synthesize high-quality PET images from their low-dose counterparts.
- Score: 10.994223928445589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positron emission tomography (PET) is the most sensitive molecular imaging
modality routinely used in modern healthcare. The high radioactivity caused by the
injected tracer dose is a major concern in PET imaging and limits its clinical
applications; however, reducing the dose leads to inadequate image quality for
diagnostic practice. Motivated by the need to produce high-quality images at minimal
dose, convolutional neural network (CNN)-based methods have been developed to
synthesize high-quality PET images from their low-dose counterparts. Previous
CNN-based studies usually map low-dose PET directly into feature space without
accounting for the different dose reduction levels. In this study, we present a novel
approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with
Super Resolution Refinement). Specifically, a multi-task coarse generator, guided by a
classification head, allows for a more comprehensive understanding of the noise-level
features present in the low-dose data, resulting in improved image synthesis.
Moreover, to recover the spatial details of standard-dose PET, an auxiliary
super-resolution network, Contextual-Net, is trained in a second stage to narrow the
gap between the coarse prediction and standard-dose PET. We compared our method with
state-of-the-art methods on whole-body PET at different dose reduction factors (DRFs).
Experiments demonstrate that our method outperforms the others at all DRFs.
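To make the two-stage design described in the abstract more concrete, below is a minimal PyTorch-style sketch of the general idea: a multi-task coarse generator whose shared 3D encoder feeds both a reconstruction decoder and a dose-reduction-factor (DRF) classification head, followed by a second-stage refinement network standing in for Contextual-Net. Module names, layer widths, the number of DRF classes, and the loss combination are illustrative assumptions; the adversarial discriminator and the authors' actual training procedure are omitted.

```python
# Hedged sketch of a classification-guided, two-stage 3D synthesis pipeline.
# Everything here (names, widths, 4 DRF classes, loss weighting) is assumed
# for illustration and is not the CG-3DSRGAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseGenerator3D(nn.Module):
    """Multi-task coarse generator: a shared 3D encoder feeds a reconstruction
    decoder and a DRF classification head that guides the encoder toward
    noise-level features."""
    def __init__(self, ch=32, num_drf_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(ch * 2, num_drf_classes),
        )

    def forward(self, low_dose):
        feats = self.encoder(low_dose)
        return self.decoder(feats), self.cls_head(feats)

class ContextualNetSR(nn.Module):
    """Second-stage refiner that narrows the gap between the coarse prediction
    and standard-dose PET (a simple stand-in for the paper's Contextual-Net)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, coarse, low_dose):
        # Residual refinement conditioned on both the coarse output and the input.
        return coarse + self.net(torch.cat([coarse, low_dose], dim=1))

if __name__ == "__main__":
    low_dose = torch.randn(1, 1, 32, 32, 32)   # toy low-dose PET volume
    target = torch.randn_like(low_dose)        # stand-in standard-dose volume
    drf_label = torch.tensor([2])              # toy DRF class index
    coarse_gen, refiner = CoarseGenerator3D(), ContextualNetSR()
    coarse, drf_logits = coarse_gen(low_dose)
    refined = refiner(coarse, low_dose)
    # Stage-1 multi-task loss: reconstruction plus classification guidance.
    loss = F.l1_loss(coarse, target) + F.cross_entropy(drf_logits, drf_label)
    print(refined.shape, float(loss))
```

The point of the shared encoder is that the cross-entropy term pushes it to separate dose/noise levels while the reconstruction term drives synthesis, which is one plausible reading of "classification-guided"; the second stage then only has to recover residual spatial detail.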
Related papers
- Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z)
- 2.5D Multi-view Averaging Diffusion Model for 3D Medical Image Translation: Application to Low-count PET Reconstruction with CT-less Attenuation Correction [17.897681480967087]
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation hazards to patients and healthcare providers.
It is desirable to develop 3D methods to translate the non-attenuation-corrected low-dose PET into attenuation-corrected standard-dose PET.
Recent diffusion models have emerged as the new state of the art in deep learning for image-to-image translation, outperforming traditional CNN-based methods.
We developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation with application on NAC (a toy sketch of the general multi-view averaging idea follows this entry).
arXiv Detail & Related papers (2024-06-12T16:22:41Z)
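The 2.5D multi-view averaging idea named in the entry above can be illustrated with a toy sketch: a 2D model is applied slice-by-slice along each of the three orthogonal axes of a volume, and the three reassembled volumes are averaged. The placeholder 2D network below stands in for MADM's per-view models; all names and shapes are assumptions rather than the paper's implementation.

```python
# Toy illustration of generic 2.5D multi-view averaging: slice a 3D volume
# along each orthogonal axis, run a 2D model per slice, restack, and average.
# Toy2DModel is a placeholder for a trained per-view translator.
import torch
import torch.nn as nn

class Toy2DModel(nn.Module):
    """Placeholder 2D slice-to-slice translator."""
    def __init__(self, ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_along_axis(model, volume, axis):
    """Apply a 2D model to every slice of a (D, H, W) volume taken along `axis`."""
    slices = torch.unbind(volume, dim=axis)
    preds = [model(s.unsqueeze(0).unsqueeze(0)).squeeze(0).squeeze(0) for s in slices]
    return torch.stack(preds, dim=axis)

@torch.no_grad()
def multi_view_average(models, volume):
    """Average the axial / coronal / sagittal slice-wise predictions."""
    views = [predict_along_axis(m, volume, ax) for m, ax in zip(models, (0, 1, 2))]
    return torch.stack(views).mean(dim=0)

if __name__ == "__main__":
    vol = torch.randn(16, 16, 16)              # toy 3D volume
    per_view_models = [Toy2DModel() for _ in range(3)]
    fused = multi_view_average(per_view_models, vol)
    print(fused.shape)                         # torch.Size([16, 16, 16])
```

Running 2D models per view and fusing the resulting volumes is a common way to keep memory usage close to 2D training while still producing 3D-consistent output, which is the usual motivation for 2.5D designs.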
- Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes the pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z)
- Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across different ranges of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network [14.381830012670969]
Recent methods for generating high-quality PET images from low-dose counterparts have been reported as state-of-the-art for low-to-high image recovery.
To address the remaining limitations of these methods, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the three domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
GFP serves as a learnable frequency filter that adjusts the frequency components in the frequency domain, forcing the network to restore high-frequency details (a minimal sketch of such a filter follows this entry).
arXiv Detail & Related papers (2023-08-10T06:20:00Z)
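The learnable frequency filter described in the TriDo-Former entry above can be sketched generically as a per-frequency learnable gain applied in the FFT domain, letting the network re-weight (and in particular restore) high-frequency components. The exact form of the paper's GFP module may differ; the shapes and pass-through initialization below are assumptions.

```python
# Hedged sketch of a learnable frequency-domain filter: a real-valued gain per
# rFFT bin multiplies the image spectrum and the result is transformed back.
# This is a generic stand-in, not the GFP module from TriDo-Former.
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    def __init__(self, height, width):
        super().__init__()
        # One learnable gain per frequency bin, initialized to pass-through.
        self.gain = nn.Parameter(torch.ones(height, width // 2 + 1))

    def forward(self, x):                        # x: (N, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")  # complex spectrum (N, C, H, W//2+1)
        spec = spec * self.gain                  # re-weight frequency components
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)                # toy image batch
    y = LearnableFrequencyFilter(64, 64)(x)
    print(y.shape)                               # torch.Size([2, 1, 64, 64])
```

Because the FFT and the element-wise gain are differentiable, such a filter can be trained end-to-end with the rest of the network, so losses sensitive to fine detail can adjust the high-frequency gains.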
- Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
arXiv Detail & Related papers (2023-02-27T06:55:00Z)
- Anatomical-Guided Attention Enhances Unsupervised PET Image Denoising Performance [0.0]
We propose an unsupervised 3D PET image denoising method based on anatomical information-guided attention mechanism.
Our proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively.
arXiv Detail & Related papers (2021-09-02T09:27:07Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.