Sparse-view Cone Beam CT Reconstruction using Data-consistent Supervised
and Adversarial Learning from Scarce Training Data
- URL: http://arxiv.org/abs/2201.09318v1
- Date: Sun, 23 Jan 2022 17:08:52 GMT
- Authors: Anish Lahiri, Marc Klasky, Jeffrey A. Fessler and Saiprasad
Ravishankar
- Abstract summary: As the number of available projections decreases, traditional reconstruction techniques perform poorly.
Deep learning-based reconstruction methods have garnered a lot of attention because they yield better performance when enough training data is available.
This work focuses on image reconstruction in such settings, when both the number of available CT projections and the amount of training data are extremely limited.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstruction of CT images from a limited set of projections through an
object is important in several applications ranging from medical imaging to
industrial settings. As the number of available projections decreases,
traditional reconstruction techniques such as the FDK algorithm and model-based
iterative reconstruction methods perform poorly. Recently, data-driven methods
such as deep learning-based reconstruction have garnered a lot of attention in
applications because they yield better performance when enough training data is
available. However, even these methods have their limitations when there is a
scarcity of available training data. This work focuses on image reconstruction
in such settings, i.e., when both the number of available CT projections and
the amount of training data are extremely limited. We adopt a sequential reconstruction
approach over several stages using an adversarially trained shallow network for
'destreaking' followed by a data-consistency update in each stage. To deal with
the challenge of limited data, we use image subvolumes to train our method, and
patch aggregation during testing. To deal with the computational challenge of
learning on 3D datasets for 3D reconstruction, we use a hybrid 3D-to-2D mapping
network for the 'destreaking' part. Comparisons to other methods over several
test examples indicate that the proposed method has much potential when both
the number of projections and the available training data are highly limited.
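The staged procedure described in the abstract, a learned 'destreaking' step followed by a data-consistency update in each stage, can be sketched as below. This is a minimal illustration only: the 1-D signals, the dense forward operator `A`, the smoothing stand-in for the adversarially trained shallow network, and the gradient step size are all assumptions, not the paper's actual 3D CBCT implementation.

```python
import numpy as np

def data_consistency_update(x, A, y, iters=5):
    """Gradient steps on 0.5 * ||A x - y||^2 to enforce fidelity to the
    measured projections y. A is a dense stand-in for a CT forward operator."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)  # 1/L step, L = ||A||_2^2
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))
    return x

def destreak(x):
    """Placeholder for the adversarially trained shallow destreaking network;
    here just a small smoothing filter so the sketch runs end to end."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

def sequential_reconstruction(x0, A, y, stages=3):
    """Alternate learned destreaking with data-consistency updates."""
    x = x0
    for _ in range(stages):
        x = destreak(x)                        # learned artifact removal
        x = data_consistency_update(x, A, y)   # re-enforce measured data
    return x
```

In the paper the destreaking network operates on 3D subvolumes via a hybrid 3D-to-2D mapping and the data-consistency step uses the cone-beam system model; the alternation shown here is the structural idea.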
Related papers
- Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review [0.0]
Deep learning techniques have been used to improve image quality in cone-beam computed tomography (CBCT).
We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT.
One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models.
arXiv Detail & Related papers (2024-03-27T13:46:01Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z) - Simulator-Based Self-Supervision for Learned 3D Tomography Reconstruction [34.93595625809309]
Prior machine learning approaches require reference reconstructions computed by another algorithm for training.
We train our model in a fully self-supervised manner using only noisy 2D X-ray data.
Our results show significantly higher visual fidelity and better PSNR over techniques that rely on existing reconstructions.
arXiv Detail & Related papers (2022-12-14T13:21:37Z) - Deep Learning for Material Decomposition in Photon-Counting CT [0.5801044612920815]
We present a novel deep-learning solution for material decomposition in PCCT, based on an unrolled/unfolded iterative network.
Our approach outperforms a maximum likelihood estimation, a variational method, as well as a fully-learned network.
arXiv Detail & Related papers (2022-08-05T19:05:16Z) - DH-GAN: A Physics-driven Untrained Generative Adversarial Network for 3D Microscopic Imaging using Digital Holography [3.4635026053111484]
Digital holography is a 3D imaging technique that emits a laser beam with a plane wavefront at an object and measures the intensity of the diffracted waveform, called a hologram.
Recently, deep learning (DL) methods have been used for more accurate holographic processing.
We propose a new DL architecture based on generative adversarial networks that uses a discriminative network for realizing a semantic measure for reconstruction quality.
arXiv Detail & Related papers (2022-05-25T17:13:45Z) - 3D helical CT Reconstruction with a Memory Efficient Learned Primal-Dual Architecture [1.3518297878940662]
This paper modifies a domain adapted neural network architecture, the Learned Primal-Dual (LPD), so it can be trained and applied to reconstruction in this setting.
It is the first to apply an unrolled deep learning architecture for reconstruction on full-sized clinical data.
arXiv Detail & Related papers (2022-05-24T10:32:32Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Data Consistent CT Reconstruction from Insufficient Data with Learned Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z)
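Several of the listed works share the idea of blending a learned prior with explicit fidelity to the measured data. A toy sketch of that combination, formulated as regularized least squares between a (hypothetical) prior image and the projection data, is shown below; the operator `A`, the weight `lam`, and the plain gradient-descent solver are illustrative assumptions, not the DCR method's actual formulation.

```python
import numpy as np

def combine_prior_with_data(A, y, x_prior, lam=1.0, iters=200):
    """Minimize ||A x - y||^2 + lam * ||x - x_prior||^2 by gradient descent.
    x_prior stands in for a deep-learning prior image; the first term keeps
    the result consistent with the measured projections y."""
    x = x_prior.copy()
    L = np.linalg.norm(A, 2) ** 2 + lam  # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (x - x_prior)
        x = x - grad / L
    return x
```

Smaller `lam` trusts the measurements more; larger `lam` trusts the prior image more. Real data-consistent methods replace the quadratic prior term with compressed-sensing regularization and use the actual cone-beam operator.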
This list is automatically generated from the titles and abstracts of the papers in this site.