FastPET: Near Real-Time PET Reconstruction from Histo-Images Using a
Neural Network
- URL: http://arxiv.org/abs/2002.04665v2
- Date: Mon, 15 Jun 2020 15:07:59 GMT
- Title: FastPET: Near Real-Time PET Reconstruction from Histo-Images Using a
Neural Network
- Authors: William Whiteley, Vladimir Panin, Chuanyu Zhou, Jorge Cabello, Deepak
Bharkhada and Jens Gregor
- Abstract summary: This paper proposes FastPET, a novel direct reconstruction convolutional neural network that is architecturally simple and memory efficient.
FastPET operates on a histo-image representation of the raw data, enabling it to reconstruct 3D image volumes 67x faster than Ordered Subsets Expectation Maximization (OSEM).
The results show that the reconstructions are not only very fast, but also of high quality and lower in noise than iterative reconstructions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Direct reconstruction of positron emission tomography (PET) data using deep
neural networks is a growing field of research. Initial results are promising,
but often the networks are complex, memory utilization inefficient, produce
relatively small 2D image slices (e.g., 128x128), and low count rate
reconstructions are of varying quality. This paper proposes FastPET, a novel
direct reconstruction convolutional neural network that is architecturally
simple, memory space efficient, works for non-trivial 3D image volumes and is
capable of processing a wide spectrum of PET data including low-dose and
multi-tracer applications. FastPET uniquely operates on a histo-image (i.e.,
image-space) representation of the raw data enabling it to reconstruct 3D image
volumes 67x faster than Ordered Subsets Expectation Maximization (OSEM). We
detail the FastPET method trained on whole-body and low-dose whole-body data
sets and explore qualitative and quantitative aspects of reconstructed images
from clinical and phantom studies. Additionally, we explore the application of
FastPET on a neurology data set containing multiple different tracers. The
results show that not only are the reconstructions very fast, but the images
are of high quality and lower in noise than iterative reconstructions.
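The abstract does not reproduce the network itself, but the core idea, a convolutional network that maps a histo-image volume directly to a reconstructed volume in a single forward pass, can be illustrated with a minimal sketch. The encoder-decoder below is an assumption for illustration only: the layer counts, channel widths, attenuation-map input channel, and tensor shapes are hypothetical and not the published FastPET architecture.
```python
# Minimal sketch (not the published FastPET architecture): a plain 3D
# convolutional encoder-decoder that maps a histo-image volume (optionally
# concatenated with an attenuation-map channel) to a reconstructed volume.
# Layer counts, channel widths, and input shapes are illustrative assumptions.
import torch
import torch.nn as nn

class HistoImageReconNet(nn.Module):
    def __init__(self, in_channels: int = 2, base: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, base, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=3, stride=2, padding=1),  # downsample
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2),    # upsample
            nn.ReLU(inplace=True),
            nn.Conv3d(base, 1, kernel_size=3, padding=1),                   # reconstructed volume
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width) histo-image (+ mu-map) volume
        return self.decoder(self.encoder(x))

# Usage: a single forward pass stands in for an iterative OSEM reconstruction.
if __name__ == "__main__":
    histo = torch.rand(1, 1, 64, 128, 128)   # backprojected event histogram (illustrative size)
    mu_map = torch.rand(1, 1, 64, 128, 128)  # attenuation-map channel (assumption)
    net = HistoImageReconNet(in_channels=2)
    recon = net(torch.cat([histo, mu_map], dim=1))
    print(recon.shape)  # torch.Size([1, 1, 64, 128, 128])
```
The design choice this illustrates is the one the abstract emphasizes: because the input is already in image space, a fixed-cost feed-forward network can replace the repeated forward/back projections of an iterative reconstruction.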
Related papers
- StoDIP: Efficient 3D MRF image reconstruction with deep image priors and stochastic iterations [3.4453266252081645]
We introduce StoDIP, a new algorithm that extends the ground-truth-free Deep Image Prior (DIP) reconstruction to 3D MRF imaging.
Tested on a dataset of whole-brain scans from healthy volunteers, StoDIP demonstrated superior performance over the ground-truth-free reconstruction baselines, both quantitatively and qualitatively.
arXiv Detail & Related papers (2024-08-05T10:32:06Z) - Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality standard-dose PET (SPET) images from low-dose PET (LPET) inputs.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction
from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
A global frequency parser (GFP) serves as a learnable frequency filter that adjusts components in the frequency domain, forcing the network to restore high-frequency details.
The model outperforms state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-08-10T06:20:00Z) - STPDnet: Spatial-temporal convolutional primal dual network for dynamic
PET image reconstruction [16.47493157003075]
We propose a spatial-temporal convolutional primal dual network (STPDnet) for dynamic PET image reconstruction.
The physical projection of PET is embedded in the iterative learning process of the network.
Experiments show that the proposed method achieves substantial noise reduction in both the temporal and spatial domains.
arXiv Detail & Related papers (2023-03-08T15:43:15Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - REGAS: REspiratory-GAted Synthesis of Views for Multi-Phase CBCT
Reconstruction from a single 3D CBCT Acquisition [75.64791080418162]
REGAS proposes a self-supervised method to synthesize the undersampled tomographic views and mitigate aliasing artifacts in reconstructed images.
To address the large memory cost of deep neural networks on high resolution 4D data, REGAS introduces a novel Ray Path Transformation (RPT) that allows for distributed, differentiable forward projections.
arXiv Detail & Related papers (2022-08-17T03:42:19Z) - List-Mode PET Image Reconstruction Using Deep Image Prior [3.6427817678422016]
List-mode positron emission tomography (PET) image reconstruction is an important tool for PET scanners.
Deep learning is one possible solution to enhance the quality of PET image reconstruction.
In this study, we propose a novel list-mode PET image reconstruction method using an unsupervised CNN called deep image prior.
arXiv Detail & Related papers (2022-04-28T10:44:33Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Direct PET Image Reconstruction Incorporating Deep Image Prior and a
Forward Projection Model [0.0]
Convolutional neural networks (CNNs) have recently achieved remarkable performance in positron emission tomography (PET) image reconstruction.
We propose an unsupervised direct PET image reconstruction method built on a deep image prior framework.
By combining a forward projection model with the loss function, the method reconstructs images directly from sinograms without supervised training (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-09-02T08:07:58Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
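The two deep-image-prior entries above (list-mode PET reconstruction with DIP, and direct PET reconstruction with DIP plus a forward projection model) share one mechanism: an untrained network is fitted to a single measurement through the scanner's forward model. The sketch below is a simplified illustration under stated assumptions; the tiny generator, the dense stand-in for the system matrix, and the MSE data term (actual PET reconstruction typically uses a Poisson log-likelihood) are all hypothetical simplifications, not any paper's published method.
```python
# Minimal sketch of the deep-image-prior idea: an untrained CNN is fitted to a
# single measurement by pushing its output through a forward projection model
# and matching the measured sinogram. The dense system matrix, tiny generator,
# and MSE loss below are illustrative stand-ins only.
import torch
import torch.nn as nn

def dip_reconstruct(sinogram: torch.Tensor, system_matrix: torch.Tensor,
                    image_shape=(128, 128), n_iters: int = 500) -> torch.Tensor:
    # Untrained generator: fixed random input -> candidate image.
    # (Assumption: a small conv net; published methods use larger U-Net-style generators.)
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),  # keep activity non-negative
    )
    z = torch.randn(1, 1, *image_shape)                  # fixed network input
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(n_iters):
        opt.zero_grad()
        image = net(z).flatten()                          # candidate image, vectorised
        est_sinogram = system_matrix @ image              # forward projection A @ x
        loss = nn.functional.mse_loss(est_sinogram, sinogram)  # simplified data term
        loss.backward()
        opt.step()
    return net(z).detach().squeeze()                      # final reconstructed image
```
No training data is used anywhere: the network weights themselves act as the image prior, which is why these methods are described as unsupervised.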
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.