STPDnet: Spatial-temporal convolutional primal dual network for dynamic
PET image reconstruction
- URL: http://arxiv.org/abs/2303.04667v1
- Date: Wed, 8 Mar 2023 15:43:15 GMT
- Title: STPDnet: Spatial-temporal convolutional primal dual network for dynamic
PET image reconstruction
- Authors: Rui Hu, Jianan Cui, Chengjin Yu, Yunmei Chen, Huafeng Liu
- Abstract summary: We propose a spatial-temporal convolutional primal dual network (STPDnet) for dynamic PET image reconstruction.
The physical projection of PET is embedded in the iterative learning process of the network.
Experiments have shown that the proposed method can achieve substantial noise reduction in both temporal and spatial domains.
- Score: 16.47493157003075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic positron emission tomography (dPET) image reconstruction is extremely
challenging due to the limited counts received in each individual frame. In this
paper, we propose a spatial-temporal convolutional primal dual network
(STPDnet) for dynamic PET image reconstruction. Both spatial and temporal
correlations are encoded by 3D convolution operators. The physical projection
of PET is embedded in the iterative learning process of the network, which
provides physical constraints and enhances interpretability. Experiments on
real rat scan data have shown that the proposed method can
achieve substantial noise reduction in both temporal and spatial domains and
outperform the maximum likelihood expectation maximization (MLEM),
spatial-temporal kernel method (KEM-ST), DeepPET and Learned Primal Dual (LPD).
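The abstract gives no implementation details beyond this, but the core recipe (unrolled primal-dual iterations, 3D convolutions over space and time, and a fixed PET projection operator embedded in each iteration) can be sketched in PyTorch. Everything below is an illustrative assumption: the operator pair `forward_op`/`adjoint_op`, the iteration count, and the layer sizes are not taken from the paper.

```python
import torch
import torch.nn as nn

class ConvBlock3d(nn.Module):
    """3D conv block: kernels span (frames, y, x), so spatial and temporal
    correlations are encoded jointly, as in the abstract."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.PReLU(),
            nn.Conv3d(32, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class STPDnetSketch(nn.Module):
    """Unrolled primal-dual iterations. The fixed (non-learned) projector pair
    embeds the PET physics in every iteration, which is what supplies the
    physical constraint and interpretability claimed in the abstract."""
    def __init__(self, forward_op, adjoint_op, n_iters=10):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.primal = nn.ModuleList([ConvBlock3d(2, 1) for _ in range(n_iters)])
        self.dual = nn.ModuleList([ConvBlock3d(3, 1) for _ in range(n_iters)])

    def forward(self, sinogram):
        # sinogram: (batch, 1, frames, angles, bins); image: (batch, 1, frames, H, W)
        x = torch.zeros_like(self.At(sinogram))   # primal variable (dynamic image)
        h = torch.zeros_like(sinogram)            # dual variable (sinogram domain)
        for pb, db in zip(self.primal, self.dual):
            h = h + db(torch.cat([h, self.A(x), sinogram], dim=1))  # dual update
            x = x + pb(torch.cat([x, self.At(h)], dim=1))           # primal update
        return x
```

In practice `forward_op` would be a frame-wise Radon/system-matrix projection (e.g., from a tomography toolbox), applied to a (batch, 1, frames, H, W) dynamic image.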
Related papers
- Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs).
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points.
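To make the layout concrete, here is a deliberately naive Python sketch: each frame becomes an H x W x 3 image whose pixel values are point coordinates. The spherical projection is only a stand-in; the actual SPCV mapping is learned to achieve the smoothness and consistency the abstract mentions.

```python
import numpy as np

def frame_to_spcv(points, H=64, W=128):
    """Naive spherical projection of one point cloud frame onto a 2D grid
    whose 3 channels store the (x, y, z) coordinates of the mapped points.
    Only a geometric stand-in to show the data layout, not the learned map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.arctan2(y, x)                       # [-pi, pi]
    elevation = np.arctan2(z, np.sqrt(x**2 + y**2))  # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    v = ((elevation + np.pi / 2) / np.pi * (H - 1)).astype(int)
    img = np.zeros((H, W, 3), dtype=np.float32)
    img[v, u] = points                               # last write wins per pixel
    return img

video = np.stack([frame_to_spcv(np.random.randn(1024, 3)) for _ in range(8)])
print(video.shape)  # (8, 64, 128, 3): a point cloud sequence as a 2D video
```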
arXiv Detail & Related papers (2024-03-02T08:18:57Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
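That equivalence can be written down compactly: the SDS gradient matches the gradient of a weighted L2 loss between the rendering and a detached, one-step-denoised target. A schematic PyTorch fragment, where `denoise_one_step` is a placeholder for a frozen pretrained diffusion model rather than StableDreamer's actual API:

```python
import torch

def add_noise(x, noise, alpha_bar_t):
    # forward diffusion: x_t = sqrt(a_bar) * x + sqrt(1 - a_bar) * eps
    return alpha_bar_t.sqrt() * x + (1 - alpha_bar_t).sqrt() * noise

def sds_as_l2(rendered, alpha_bar_t, denoise_one_step, weight=1.0):
    """SDS rewritten as supervised L2 reconstruction: pull the rendering
    toward the diffusion model's denoised estimate of its noised version.
    `denoise_one_step` is an assumed stand-in for a frozen denoiser;
    `alpha_bar_t` is a scalar tensor from the noise schedule."""
    noise = torch.randn_like(rendered)
    noisy = add_noise(rendered, noise, alpha_bar_t)
    with torch.no_grad():                      # the target carries no gradient
        target = denoise_one_step(noisy, alpha_bar_t)
    return weight * ((rendered - target) ** 2).mean()
```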
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct a comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
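A minimal sketch of what a residual field with factorized weights could look like for a single linear layer (rank, initialization, and the per-frame index lookup are assumptions for illustration, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResFieldLinear(nn.Module):
    """Linear layer whose weights carry a time-dependent, low-rank residual:
    W(t) = W_base + sum_r coeff[t, r] * basis[r]. The factorization keeps
    the number of trainable parameters small as the frame count grows."""
    def __init__(self, in_f, out_f, n_frames, rank=8):
        super().__init__()
        self.base = nn.Linear(in_f, out_f)
        self.coeff = nn.Parameter(torch.zeros(n_frames, rank))           # per-frame codes
        self.basis = nn.Parameter(torch.randn(rank, out_f, in_f) * 1e-2) # shared bases

    def forward(self, x, t):
        # factorized residual for frame t: (rank,) x (rank, out, in) -> (out, in)
        delta_w = torch.einsum('r,roi->oi', self.coeff[t], self.basis)
        return F.linear(x, self.base.weight + delta_w, self.base.bias)
```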
arXiv Detail & Related papers (2023-09-06T16:59:36Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
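Read as pseudocode, the division of labor might look like the following sketch, where `cpm` and `irm` are assumed callables standing in for the paper's modules and the step count is illustrative:

```python
def coarse_to_fine_pet(sinogram, cpm, irm, n_refine_steps=4):
    """Coarse prediction once, then a few diffusion refinement steps
    conditioned on it; delegating most work to the cheap CPM is what
    makes the overall sampling fast. `cpm` and `irm` are assumed
    callables, not the paper's actual modules."""
    coarse = cpm(sinogram)                 # one-shot coarse estimate (CPM)
    x = coarse
    for t in reversed(range(n_refine_steps)):
        x = irm(x, t, cond=coarse)         # iterative refinement (IRM)
    return x
```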
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction
from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
It outperforms state-of-the-art methods qualitatively and quantitatively.
The GFP serves as a learnable frequency filter that adjusts the frequency components in the frequency domain, forcing the network to restore high-frequency details.
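The abstract does not define the GFP's internals, but a learnable frequency filter of this kind generically amounts to an FFT, an elementwise multiplication by trainable weights, and an inverse FFT. A hedged PyTorch sketch (shapes and initialization are guesses, not the TriDo-Former code):

```python
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Modulate each frequency component with a trainable complex gain so the
    network can re-amplify high frequencies. One gain per rFFT bin, stored as
    two real values; initialized to 1 + 0j, i.e., the identity filter."""
    def __init__(self, h, w):
        super().__init__()
        init = torch.zeros(h, w // 2 + 1, 2)
        init[..., 0] = 1.0                      # start as the identity filter
        self.gain = nn.Parameter(init)

    def forward(self, x):                       # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm='ortho') # to frequency domain
        spec = spec * torch.view_as_complex(self.gain)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm='ortho')
```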
arXiv Detail & Related papers (2023-08-10T06:20:00Z) - Image Reconstruction for Accelerated MR Scan with Faster Fourier
Convolutional Neural Networks [87.87578529398019]
Partial scan is a common approach to accelerate Magnetic Resonance Imaging (MRI) data acquisition in both 2D and 3D settings.
We propose a novel convolutional operator called Faster Fourier Convolution (FasterFC) to replace two consecutive convolution operations.
A 2D accelerated MRI method, FasterFC-End-to-End-VarNet, uses FasterFC to improve the sensitivity maps and reconstruction quality.
A 3D accelerated MRI method, the FasterFC-based Single-to-group Network (FAS-Net), utilizes a single-to-group algorithm to guide k-space domain reconstruction.
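For readers unfamiliar with the family, the generic fast-Fourier-convolution pattern that FasterFC builds on can be sketched as follows; this shows the standard spectral branch only, not the exact FasterFC operator or its single-to-group algorithm:

```python
import torch
import torch.nn as nn

class FourierConvUnit(nn.Module):
    """Generic Fourier convolution: a 1x1 convolution applied to the real and
    imaginary parts in the frequency domain gives a global receptive field in
    a single step, which is useful when k-space is only partially sampled."""
    def __init__(self, channels):
        super().__init__()
        self.freq_conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):                            # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm='ortho')      # (B, C, H, W//2+1), complex
        z = torch.cat([spec.real, spec.imag], dim=1) # stack re/im as channels
        z = self.freq_conv(z)
        re, im = z.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(re, im), s=x.shape[-2:], norm='ortho')
```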
arXiv Detail & Related papers (2023-06-05T13:53:57Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
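To unpack the maximum-likelihood claim: an invertible architecture gives exact densities via the change-of-variables formula, so training reduces to minimizing a closed-form negative log-likelihood. A generic normalizing-flow loss (not code from the paper):

```python
import math
import torch

def cinn_nll(z, log_det_jacobian):
    """Negative log-likelihood of an invertible network via change of
    variables: log p(x) = log N(z; 0, I) + log|det J|, where z = f(x).
    `z` has shape (batch, dim); `log_det_jacobian` has shape (batch,)."""
    d = z.shape[1]
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * d * math.log(2 * math.pi)
    return -(log_pz + log_det_jacobian).mean()
```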
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - Deep Domain Adversarial Adaptation for Photon-efficient Imaging Based on
Spatiotemporal Inception Network [11.58898808789911]
In single-photon LiDAR, photon-efficient imaging captures the 3D structure of a scene with only a few signal photons detected per pixel.
Existing deep learning models for this task are trained on simulated datasets, which poses a domain-shift challenge when they are applied to realistic scenarios.
We propose a spatiotemporal inception network (STIN) for photon-efficient imaging, which precisely predicts depth from a sparse, high-noise photon-counting histogram by fully exploiting spatial and temporal information.
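As a rough illustration of the spatiotemporal-inception idea, parallel 3D convolutions with different kernel sizes over a (batch, channel, time-bin, height, width) histogram could look like this; the branch widths and kernels are assumptions, not the published STIN configuration:

```python
import torch
import torch.nn as nn

class SpatioTemporalInceptionBlock(nn.Module):
    """Inception-style block over photon-count histograms: parallel 3D
    convolutions at several scales capture temporal (time-bin) and spatial
    correlations jointly, then concatenate along the channel axis."""
    def __init__(self, in_ch, branch_ch=8):
        super().__init__()
        self.b1 = nn.Conv3d(in_ch, branch_ch, kernel_size=1)
        self.b2 = nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b3 = nn.Conv3d(in_ch, branch_ch, kernel_size=(7, 3, 3), padding=(3, 1, 1))

    def forward(self, x):                     # x: (B, C, T, H, W)
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)
```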
arXiv Detail & Related papers (2022-01-07T14:51:48Z) - Direct PET Image Reconstruction Incorporating Deep Image Prior and a
Forward Projection Model [0.0]
Convolutional neural networks (CNNs) have recently achieved remarkable performance in positron emission tomography (PET) image reconstruction.
We propose an unsupervised direct PET image reconstruction method that incorporates a deep image prior framework.
Our proposed method incorporates a forward projection model with a loss function to achieve unsupervised direct PET image reconstruction from sinograms.
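The training loop implied by that description is compact enough to sketch: the loss lives in the sinogram domain, so no ground-truth image is ever needed. Names below are illustrative, not the authors' code:

```python
import torch

def dip_direct_recon_step(dip_net, z, forward_proj, measured_sino, optimizer):
    """One unsupervised step of the deep-image-prior scheme described above:
    a network maps a fixed random input `z` to a candidate image, and the
    loss compares the *projected* image against the measured sinogram."""
    optimizer.zero_grad()
    image = dip_net(z)                         # DIP output: candidate PET image
    loss = ((forward_proj(image) - measured_sino) ** 2).mean()
    loss.backward()
    optimizer.step()
    return image.detach(), loss.item()
```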
arXiv Detail & Related papers (2021-09-02T08:07:58Z) - Direct Reconstruction of Linear Parametric Images from Dynamic PET Using
Nonlocal Deep Image Prior [13.747210115485487]
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms.
Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited.
Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when large numbers of high-quality training labels are available.
arXiv Detail & Related papers (2021-06-18T21:30:22Z) - FastPET: Near Real-Time PET Reconstruction from Histo-Images Using a
Neural Network [0.0]
This paper proposes FastPET, a novel direct-reconstruction convolutional neural network that is architecturally simple and memory-efficient.
FastPET operates on a histo-image representation of the raw data, enabling it to reconstruct 3D image volumes 67x faster than Ordered Subsets Expectation Maximization (OSEM).
The results show that not only are the reconstructions very fast, but the images are also high quality, with lower noise than iterative reconstructions.
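A histo-image is, roughly, an image-domain histogram of events (e.g., deposited at their TOF-estimated positions), which is what lets a plain CNN consume the raw data directly. A naive NumPy sketch of the accumulation step only; FastPET's actual histogrammer is more involved:

```python
import numpy as np

def events_to_histoimage(event_positions, shape=(64, 64, 64)):
    """Deposit each event (given as an approximate 3D position, e.g. from
    TOF localization) into a voxel grid, yielding an image-domain input
    for a reconstruction CNN. Grid size and indexing are illustrative."""
    histo = np.zeros(shape, dtype=np.float32)
    idx = np.clip(event_positions.astype(int), 0, np.array(shape) - 1)
    np.add.at(histo, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return histo
```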
arXiv Detail & Related papers (2020-02-11T20:32:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.