Data Consistent CT Reconstruction from Insufficient Data with Learned
Prior Images
- URL: http://arxiv.org/abs/2005.10034v1
- Date: Wed, 20 May 2020 13:30:49 GMT
- Title: Data Consistent CT Reconstruction from Insufficient Data with Learned
Prior Images
- Authors: Yixing Huang, Alexander Preuhs, Michael Manhart, Guenter Lauritsch,
Andreas Maier
- Abstract summary: We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
- Score: 70.13735569016752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image reconstruction from insufficient data is common in computed tomography
(CT), e.g., image reconstruction from truncated data, limited-angle data and
sparse-view data. Deep learning has achieved impressive results in this field.
However, the robustness of deep learning methods is still a concern for
clinical applications due to the following two challenges: a) With limited
access to sufficient training data, a learned deep learning model may not
generalize well to unseen data; b) Deep learning models are sensitive to noise.
Therefore, the quality of images processed only by neural networks may be
inadequate. In this work, we investigate the robustness of deep learning in CT
image reconstruction by showing false negative and false positive lesion cases.
Since learning-based images with incorrect structures are likely not consistent
with measured projection data, we propose a data consistent reconstruction
(DCR) method to improve their image quality, combining the advantages of
compressed sensing and deep learning: First, a prior image is generated by deep
learning. Afterwards, unmeasured projection data are inpainted by forward
projection of the prior image. Finally, iterative reconstruction with
reweighted total variation regularization is applied, integrating data
consistency for measured data and learned prior information for missing data.
The efficacy of the proposed method is demonstrated in cone-beam CT with
truncated data, limited-angle data and sparse-view data, respectively. For
example, for truncated data, DCR achieves a mean root-mean-square error of 24
HU and a mean structure similarity index of 0.999 inside the field-of-view for
different patients in the noisy case, while the state-of-the-art U-Net method
achieves 55 HU and 0.995 respectively for these two metrics.
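The three-step DCR pipeline described in the abstract (learned prior image, projection inpainting, regularized iterative reconstruction) can be sketched in a few lines of Python. The example below is only an illustrative 2D parallel-beam analogue built on scikit-image, not the paper's cone-beam implementation: a TV-denoised filtered back-projection stands in for the learned (U-Net) prior, plain TV denoising (denoise_tv_chambolle) stands in for the reweighted total variation regularization, and the per-view weights, step size and iteration count are hand-picked assumptions for illustration only.

# Minimal sketch of the DCR idea for the sparse-view case (illustrative
# assumptions: 2D parallel-beam geometry instead of cone-beam CT, a
# TV-denoised FBP as a placeholder for the learned prior, plain TV
# denoising instead of reweighted total variation).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import radon, iradon, rescale

# Simulated sparse-view acquisition: keep every 6th of 180 projection angles.
image = rescale(shepp_logan_phantom(), 0.5)
full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
measured_idx = np.arange(0, 180, 6)
sino_measured = radon(image, theta=full_angles[measured_idx])

# Step 1: prior image. The paper uses a deep network; here a TV-denoised
# filtered back-projection of the sparse data is a simple stand-in.
prior = denoise_tv_chambolle(
    iradon(sino_measured, theta=full_angles[measured_idx]), weight=0.1)

# Step 2: inpaint unmeasured projections by forward-projecting the prior,
# while keeping the measured projections untouched.
sino_full = radon(prior, theta=full_angles)
sino_full[:, measured_idx] = sino_measured
weights = np.full(len(full_angles), 0.2)   # trust inpainted views less
weights[measured_idx] = 1.0                # enforce consistency with measured data

# Step 3: iterative reconstruction with a TV regularization step.
x = prior.copy()
step = 2e-3                                # hand-tuned step size
for _ in range(30):
    residual = radon(x, theta=full_angles) - sino_full
    # Unfiltered back-projection serves as a (scaled) adjoint of radon.
    gradient = iradon(residual * weights, theta=full_angles, filter_name=None)
    x = x - step * gradient
    x = denoise_tv_chambolle(x, weight=0.01)

print("RMSE vs. ground-truth phantom:", np.sqrt(np.mean((x - image) ** 2)))

The per-view weights carry the core idea: measured projections are enforced strongly (data consistency), while projections inpainted from the prior only guide the reconstruction where data are missing.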
Related papers
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Conditioning Generative Latent Optimization for Sparse-View CT Image Reconstruction [0.5497663232622965]
We propose an unsupervised conditional approach to the Generative Latent Optimization framework (cGLO).
The approach is tested on full-dose sparse-view CT using multiple training dataset sizes and varying numbers of viewing angles.
arXiv Detail & Related papers (2023-07-31T13:47:33Z)
- Generative Modeling in Sinogram Domain for Sparse-view CT Reconstruction [12.932897771104825]
The radiation dose in computed tomography (CT) examinations can be significantly reduced by simply decreasing the number of projection views.
Previous deep learning techniques for sparse-view data require sparse-view/full-view CT image pairs to train the network in a supervised manner.
We present a fully unsupervised score-based generative model in sinogram domain for sparse-view CT reconstruction.
arXiv Detail & Related papers (2022-11-25T06:49:18Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against accumulated error perturbations when regularized towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Inflation of test accuracy due to data leakage in deep learning-based classification of OCT images [0.0]
In this study, the effect of improper dataset splitting on model evaluation is demonstrated for two classification tasks.
Our results show that the classification accuracy is inflated by 3.9 to 26 percentage units for models tested on a dataset with improper splitting.
arXiv Detail & Related papers (2022-02-21T14:08:42Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Learning Topology from Synthetic Data for Unsupervised Depth Completion [66.26787962258346]
We present a method for inferring dense depth maps from images and sparse depth measurements.
We learn the association of sparse point clouds with dense natural shapes, and use the image as evidence to validate the predicted depth map.
arXiv Detail & Related papers (2021-06-06T00:21:12Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.