Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction
- URL: http://arxiv.org/abs/2402.00376v1
- Date: Thu, 1 Feb 2024 06:47:56 GMT
- Title: Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction
- Authors: Jiaqi Cui, Yan Wang, Lu Wen, Pinxian Zeng, Xi Wu, Jiliu Zhou, Dinggang
Shen
- Abstract summary: We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
- Score: 47.398304117228584
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: To obtain high-quality Positron emission tomography (PET) images while
minimizing radiation exposure, numerous methods have been proposed to
reconstruct standard-dose PET (SPET) images from the corresponding low-dose PET
(LPET) images. However, these methods heavily rely on voxel-based
representations, which fall short of adequately accounting for the precise
structure and fine-grained context, leading to compromised reconstruction. In
this paper, we propose a 3D point-based context clusters GAN, namely PCC-GAN,
to reconstruct high-quality SPET images from LPET. Specifically, inspired by
the geometric representation power of points, we adopt a point-based
representation to express the image structure explicitly, thus
facilitating the reconstruction with finer details. Moreover, a context
clustering strategy is applied to explore the contextual relationships among
points, which mitigates the ambiguities of small structures in the
reconstructed images. Experiments on both clinical and phantom datasets
demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction
methods qualitatively and quantitatively. Code is available at
https://github.com/gluucose/PCCGAN.
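For a concrete picture of the two ideas highlighted in the abstract, below is a minimal, self-contained sketch of (i) turning a voxel volume into a point set and (ii) a simple context-clustering step over those points. Everything here (function names, the coordinate-plus-intensity feature layout, the fixed cluster count) is an illustrative assumption, not the PCC-GAN implementation from the repository above.

```python
# Illustrative sketch only: voxel volume -> point set -> context clustering.
import numpy as np

def volume_to_points(vol):
    """Turn a (D, H, W) volume into an (N, 4) point set of
    normalized coordinates plus voxel intensity."""
    d, h, w = vol.shape
    zz, yy, xx = np.meshgrid(
        np.arange(d), np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3)
    coords = coords / np.array([d - 1, h - 1, w - 1])     # normalize to [0, 1]
    feats = vol.reshape(-1, 1)
    return np.concatenate([coords, feats], axis=1)        # (N, 4)

def context_cluster(points, num_clusters=8, seed=0):
    """Assign each point to its most similar cluster center (cosine
    similarity) and aggregate the features within each cluster."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), num_clusters, replace=False)]
    norm = lambda x: x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    sim = norm(points) @ norm(centers).T                   # (N, K) similarities
    assign = sim.argmax(axis=1)                            # hard assignment
    aggregated = np.stack(
        [points[assign == k].mean(axis=0) if np.any(assign == k) else centers[k]
         for k in range(num_clusters)])
    return assign, aggregated

# Toy usage on a random low-dose-like volume
vol = np.random.rand(8, 8, 8).astype(np.float32)
pts = volume_to_points(vol)
assign, agg = context_cluster(pts)
print(pts.shape, agg.shape)   # (512, 4) (8, 4)
```

A real model would use learned point features rather than raw intensities; this toy version only shows the convert-then-cluster data flow.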
Related papers
- Deep kernel representations of latent space features for low-dose PET-MR imaging robust to variable dose reduction [0.09362267584678274]
Low-dose positron emission tomography (PET) image reconstruction methods have the potential to significantly improve PET as an imaging modality.
Deep learning provides a promising means of incorporating prior information into the image reconstruction problem to produce quantitatively accurate images from compromised signal.
We present a method which explicitly models deep latent space features using a robust kernel representation, providing robust performance on previously unseen dose reduction factors.
arXiv Detail & Related papers (2024-09-10T03:57:31Z)
- CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
Its global frequency parser (GFP) serves as a learnable frequency filter that adjusts frequency components in the Fourier domain, forcing the network to restore high-frequency details (a simplified sketch of such a filter appears after this list).
arXiv Detail & Related papers (2023-08-10T06:20:00Z)
- Fully 3D Implementation of the End-to-end Deep Image Prior-based PET Image Reconstruction Using Block Iterative Algorithm [0.0]
Deep image prior (DIP) has attracted attention owing to its ability to reconstruct positron emission tomography (PET) images without supervision.
We present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method.
arXiv Detail & Related papers (2022-12-22T16:25:58Z)
- Structure-Preserving Image Super-Resolution [94.16949589128296]
Structures matter in single image super-resolution (SISR).
Recent studies have promoted the development of SISR by recovering photo-realistic images.
However, there are still undesired structural distortions in the recovered images.
arXiv Detail & Related papers (2021-09-26T08:48:27Z)
- Direct PET Image Reconstruction Incorporating Deep Image Prior and a Forward Projection Model [0.0]
Convolutional neural networks (CNNs) have recently achieved remarkable performance in positron emission tomography (PET) image reconstruction.
We propose an unsupervised direct PET image reconstruction method that incorporates a deep image prior framework.
Our proposed method couples a forward projection model with the loss function to achieve unsupervised direct PET image reconstruction from sinograms (see the projection-loss sketch after this list).
arXiv Detail & Related papers (2021-09-02T08:07:58Z)
- Structure-Preserving Super Resolution with Gradient Guidance [87.79271975960764]
Structures matter in single image super resolution (SISR).
Recent studies building on generative adversarial networks (GANs) have promoted the development of SISR.
However, there are always undesired structural distortions in the recovered images.
arXiv Detail & Related papers (2020-03-29T17:26:58Z)
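The TriDo-Former entry above describes the GFP as a learnable frequency filter. As a rough illustration only (the class and parameter names are made up here, and this is not the paper's module), such a filter can be sketched as learnable gains applied to the Fourier spectrum of a feature map:

```python
# Illustrative sketch of a learnable frequency-domain filter.
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Re-weights the 2D Fourier spectrum of a feature map with a
    learnable per-frequency gain, then maps it back to image space."""
    def __init__(self, height, width):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency bins along the last axis
        self.gain = nn.Parameter(torch.ones(height, width // 2 + 1))

    def forward(self, x):                             # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")       # complex spectrum
        spec = spec * self.gain                       # learnable re-weighting
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

# Toy usage
filt = LearnableFrequencyFilter(64, 64)
out = filt(torch.randn(2, 8, 64, 64))
print(out.shape)   # torch.Size([2, 8, 64, 64])
```

Because the gains are free parameters, training can boost the high-frequency bins that low-dose data tends to lose, which is the behavior the summary attributes to the GFP.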
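The direct-reconstruction entry above pairs a deep image prior with a forward projection model so that the loss is computed against measured sinograms rather than ground-truth images. The sketch below is a deliberately simplified stand-in: a toy two-view projector replaces the real PET system model, and a tiny CNN replaces the actual DIP network.

```python
# Illustrative sketch of a DIP-style, projection-domain objective.
import torch
import torch.nn as nn

def forward_project(img):
    """Toy two-view projector: row and column sums stand in for the
    line integrals of a real PET system model."""
    return torch.stack([img.sum(dim=-2), img.sum(dim=-1)], dim=-2)  # (B, C, 2, N)

prior = nn.Sequential(                        # tiny CNN standing in for the DIP network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
z = torch.randn(1, 1, 64, 64)                 # fixed random input to the prior
sinogram = torch.rand(1, 1, 2, 64)            # "measured" data (toy values here)

opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
for _ in range(100):                          # unsupervised fitting to the sinogram
    recon = prior(z)                          # candidate image
    loss = nn.functional.mse_loss(forward_project(recon), sinogram)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(recon.shape, float(loss))               # torch.Size([1, 1, 64, 64]) and final loss
```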
This list is automatically generated from the titles and abstracts of the papers on this site.