TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction
from Low-Dose Sinograms
- URL: http://arxiv.org/abs/2308.05365v1
- Date: Thu, 10 Aug 2023 06:20:00 GMT
- Title: TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction
from Low-Dose Sinograms
- Authors: Jiaqi Cui, Pinxian Zeng, Xinyi Zeng, Peng Wang, Xi Wu, Jiliu Zhou, Yan
Wang, and Dinggang Shen
- Abstract summary: TriDo-Former is a transformer-based model that unites triple domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
GFP serves as a learnable frequency filter that adjusts the frequency components in the frequency domain, forcing the network to restore high-frequency details.
- Score: 45.24575167909925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To obtain high-quality positron emission tomography (PET) images while
minimizing radiation exposure, various methods have been proposed for
reconstructing standard-dose PET (SPET) images from low-dose PET (LPET)
sinograms directly. However, current methods often neglect boundaries during
sinogram-to-image reconstruction, resulting in high-frequency distortion in the
frequency domain and diminished or fuzzy edges in the reconstructed images.
Furthermore, the convolutional architectures, which are commonly used, lack the
ability to model long-range non-local interactions, potentially leading to
inaccurate representations of global structures. To alleviate these problems,
we propose a transformer-based model that unites triple domains of sinogram,
image, and frequency for direct PET reconstruction, namely TriDo-Former.
Specifically, the TriDo-Former consists of two cascaded networks, i.e., a
sinogram enhancement transformer (SE-Former) for denoising the input LPET
sinograms and a spatial-spectral reconstruction transformer (SSR-Former) for
reconstructing SPET images from the denoised sinograms. Unlike the vanilla
transformer, which splits an image into 2D patches, our SE-Former, designed
around the PET imaging mechanism, divides the sinogram into 1D projection view
angles to preserve its inner structure while denoising, preventing noise in the
sinogram from propagating into the image domain.
Moreover, to mitigate high-frequency distortion and improve reconstruction
details, we integrate global frequency parsers (GFPs) into SSR-Former. The GFP
serves as a learnable frequency filter that globally adjusts the frequency
components in the frequency domain, forcing the network to restore
high-frequency details that resemble those of real SPET images. Validation on a
clinical dataset demonstrates that our TriDo-Former outperforms
state-of-the-art methods both qualitatively and quantitatively.
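The global frequency parser described above can be pictured as a learnable filter applied in the Fourier domain: transform a feature map to frequency space, reweight every frequency component, and transform back. A minimal numpy sketch of this idea, assuming a simple element-wise reweighting of the 2D FFT (the `freq_weights` mask stands in for the learned filter; this is an illustration of the mechanism, not the authors' implementation):

```python
import numpy as np

def global_frequency_parse(feature_map, freq_weights):
    """Apply a global frequency filter to a 2D feature map.

    Sketch of the 'global frequency parser' idea: move to the
    frequency domain, reweight each frequency component with a
    (learnable) mask, then move back to the image domain.
    """
    spectrum = np.fft.fft2(feature_map)     # image -> frequency domain
    filtered = spectrum * freq_weights      # element-wise reweighting
    return np.real(np.fft.ifft2(filtered))  # frequency -> image domain

# An all-ones (identity) filter must reproduce the input exactly.
x = np.random.default_rng(0).standard_normal((8, 8))
identity = np.ones((8, 8))
restored = global_frequency_parse(x, identity)
```

In a trained network the mask would be a parameter updated by backpropagation; here it is fixed only to show the filtering step. Zeroing the mask's DC entry, for example, turns the parser into a crude high-pass filter that removes the feature map's mean.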
Related papers
- Image2Points:A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - PET Synthesis via Self-supervised Adaptive Residual Estimation
Generative Adversarial Network [14.381830012670969]
Recent methods that generate high-quality PET images from low-dose counterparts are reported as state-of-the-art for low-to-high image recovery.
To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - CryoFormer: Continuous Heterogeneous Cryo-EM Reconstruction using
Transformer-based Neural Representations [49.49939711956354]
Cryo-electron microscopy (cryo-EM) allows for the high-resolution reconstruction of 3D structures of proteins and other biomolecules.
It is still challenging to reconstruct the continuous motions of 3D structures from noisy and randomly oriented 2D cryo-EM images.
We propose CryoFormer, a new approach for continuous heterogeneous cryo-EM reconstruction.
arXiv Detail & Related papers (2023-03-28T18:59:17Z) - STPDnet: Spatial-temporal convolutional primal dual network for dynamic
PET image reconstruction [16.47493157003075]
We propose a spatial-temporal convolutional primal dual network (STPDnet) for dynamic PET image reconstruction.
The physical projection of PET is embedded in the iterative learning process of the network.
Experiments have shown that the proposed method can achieve substantial noise reduction in both temporal and spatial domains.
arXiv Detail & Related papers (2023-03-08T15:43:15Z) - REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT
Reconstruction from a single 3D CBCT Acquisition [75.64791080418162]
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z) - Direct PET Image Reconstruction Incorporating Deep Image Prior and a
Forward Projection Model [0.0]
Convolutional neural networks (CNNs) have recently achieved remarkable performance in positron emission tomography (PET) image reconstruction.
We propose an unsupervised direct PET image reconstruction method that incorporates a deep image prior framework.
Our proposed method incorporates a forward projection model with a loss function to achieve unsupervised direct PET image reconstruction from sinograms.
arXiv Detail & Related papers (2021-09-02T08:07:58Z) - FastPET: Near Real-Time PET Reconstruction from Histo-Images Using a
Neural Network [0.0]
This paper proposes FastPET, a novel direct reconstruction convolutional neural network that is architecturally simple and memory-efficient.
FastPET operates on a histo-image representation of the raw data, enabling it to reconstruct 3D image volumes 67x faster than Ordered Subsets Expectation Maximization (OSEM).
The results show that the reconstructions are not only very fast but also of high quality, with lower noise than iterative reconstructions.
arXiv Detail & Related papers (2020-02-11T20:32:47Z)
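Several entries above (the deep-image-prior paper and STPDnet) embed a forward projection model in the reconstruction loss, so the network's output image is penalized for disagreeing with the measured sinogram. A minimal numpy sketch of such a data-fidelity term, with a toy two-view projector (row and column sums) standing in for the real PET system matrix; all names here are hypothetical stand-ins:

```python
import numpy as np

def forward_project(image):
    """Toy 2-view 'projector': row sums and column sums of the image.

    A real PET forward model would apply the scanner's system matrix
    (a full Radon-type projection); this stand-in only illustrates
    the shape of the computation.
    """
    return np.concatenate([image.sum(axis=0), image.sum(axis=1)])

def data_fidelity_loss(image, sinogram):
    """Squared L2 distance between the projected image and the data."""
    residual = forward_project(image) - sinogram
    return float(np.dot(residual, residual))

true_image = np.arange(16.0).reshape(4, 4)
measured = forward_project(true_image)  # noiseless 'sinogram'
```

In an unsupervised direct-reconstruction setup, this loss would be minimized over the network parameters producing `image`, so no standard-dose ground-truth image is required; the loss is zero exactly when the projected image matches the measured data.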
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.