A computationally efficient reconstruction algorithm for circular
cone-beam computed tomography using shallow neural networks
- URL: http://arxiv.org/abs/2010.00421v1
- Date: Thu, 1 Oct 2020 14:10:23 GMT
- Title: A computationally efficient reconstruction algorithm for circular
cone-beam computed tomography using shallow neural networks
- Authors: Marinus J. Lagerwerf, Daniel M Pelt, Willem Jan Palenstijn, K Joost
Batenburg
- Abstract summary: We introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK) algorithm.
It adds a machine learning component to the FDK algorithm to improve its reconstruction accuracy while maintaining its computational efficiency.
We show that the training time of an NN-FDK network is orders of magnitude lower than the considered deep neural networks, with only a slight reduction in reconstruction accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Circular cone-beam (CCB) Computed Tomography (CT) has become an integral part
of industrial quality control, materials science and medical imaging. The need
to acquire and process each scan in a short time naturally leads to trade-offs
between speed and reconstruction quality, creating a need for fast
reconstruction algorithms capable of creating accurate reconstructions from
limited data.
In this paper we introduce the Neural Network Feldkamp-Davis-Kress (NN-FDK)
algorithm. This algorithm adds a machine learning component to the FDK
algorithm to improve its reconstruction accuracy while maintaining its
computational efficiency. Moreover, the NN-FDK algorithm is designed such that
it has low training data requirements and is fast to train. This ensures that
the proposed algorithm can be used to improve image quality in high throughput
CT scanning settings, where FDK is currently used to keep pace with the
acquisition speed using readily available computational resources.
We compare the NN-FDK algorithm to two standard CT reconstruction algorithms
and to two popular deep neural networks trained to remove reconstruction
artifacts from the 2D slices of an FDK reconstruction. We show that the NN-FDK
reconstruction algorithm is substantially faster in computing a reconstruction
than all the tested alternative methods except for the standard FDK algorithm,
and we show it can compute accurate CCB CT reconstructions in cases of high
noise, a low number of projection angles, or large cone angles. Moreover, we
show that the training time of an NN-FDK network is orders of magnitude lower
than the considered deep neural networks, with only a slight reduction in
reconstruction accuracy.
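For intuition, below is a minimal NumPy sketch of the kind of combination the abstract describes: a shallow network with a single hidden layer that merges several FDK-type reconstructions voxel by voxel. The `fdk_reconstruction` placeholder, the filter parameterization, and all names are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np

def fdk_reconstruction(projections, geometry, filter_weights):
    """Placeholder for a filtered-backprojection (FDK) reconstruction with a
    given 1D filter. In practice this would call a tomography toolbox; it is
    deliberately left unimplemented here."""
    raise NotImplementedError

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nn_fdk_reconstruct(projections, geometry,
                       hidden_filters, hidden_biases,
                       output_weights, output_bias):
    """Conceptual sketch: each hidden node corresponds to one FDK pass with
    its own filter, and a single output node combines the hidden activations
    voxel-wise."""
    hidden = []
    for filt, bias in zip(hidden_filters, hidden_biases):
        recon = fdk_reconstruction(projections, geometry, filt)  # one FDK pass per hidden node
        hidden.append(sigmoid(recon - bias))                     # voxel-wise activation
    hidden = np.stack(hidden, axis=0)                            # (n_hidden, *volume_shape)
    combined = np.tensordot(output_weights, hidden, axes=1)      # voxel-wise weighted sum
    return sigmoid(combined - output_bias)
```

Under this reading, training reduces to fitting the filters, biases, and output weights against a small set of reference reconstructions, which is consistent with the low training-data requirements and short training time claimed in the abstract.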
Related papers
- Fast and accurate sparse-view CBCT reconstruction using meta-learned
neural attenuation field and hash-encoding regularization [13.01191568245715]
Cone beam computed tomography (CBCT) is an emerging medical imaging technique for visualizing the internal anatomical structures of patients.
Reducing the number of projections in a CBCT scan while preserving the quality of the reconstructed image is challenging.
We propose a fast and accurate sparse-view CBCT reconstruction (FACT) method to provide better reconstruction quality and faster optimization speed.
arXiv Detail & Related papers (2023-12-04T07:23:44Z) - Reinforcement Learning for Sampling on Temporal Medical Imaging
Sequences [0.0]
In this work, we apply double deep Q-learning and REINFORCE algorithms to learn the sampling strategy for dynamic image reconstruction.
We treat the data as time series, and the reconstruction method is a pre-trained autoencoder-type neural network.
We present a proof of concept that reinforcement learning algorithms are effective to discover the optimal sampling pattern.
arXiv Detail & Related papers (2023-08-28T23:55:23Z) - Untrained neural network embedded Fourier phase retrieval from few
measurements [8.914156789222266]
This paper proposes an untrained neural network embedded algorithm to solve Fourier phase retrieval (FPR) with few measurements.
We use a generative network to represent the image to be recovered, which confines the image to the space defined by the network structure.
To reduce the computational cost mainly caused by the parameter updates of the untrained NN, we develop an accelerated algorithm that adaptively trades off between explicit and implicit regularization.
arXiv Detail & Related papers (2023-07-16T16:23:50Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC).
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder built on hash encoding is adopted to help the network capture high-frequency details (a simplified coordinate-network sketch appears after this list).
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Deep Learning Neural Network for Lung Cancer Classification: Enhanced
Optimization Function [28.201018420730332]
The aim of this work is to increase overall prediction accuracy while reducing processing time by using a multispace image in the pooling layer of a convolutional neural network.
The proposed method uses an autoencoder to improve overall accuracy and predicts lung cancer from the multispace image in the pooling layer of the convolutional neural network.
arXiv Detail & Related papers (2022-08-05T18:41:17Z) - GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network that divides the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the lower-frequency part is assigned cheap operations to relieve the computational burden (see the DCT-split sketch after this list).
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - DCT-SNN: Using DCT to Distribute Spatial Information over Time for
Learning Low-Latency Spiking Neural Networks [7.876001630578417]
Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning frameworks.
However, SNNs suffer from high inference latency, which is a major bottleneck to their deployment.
We propose a scalable time-based encoding scheme that utilizes the Discrete Cosine Transform (DCT) to reduce the number of timesteps required for inference.
arXiv Detail & Related papers (2020-10-05T05:55:34Z) - NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z) - Computational optimization of convolutional neural networks using
separated filters architecture [69.73393478582027]
Convolutional neural networks (CNNs) are the standard approach to image recognition, despite the fact that they can be computationally demanding.
We consider a CNN transformation that reduces computational complexity and thus speeds up neural network processing (a separable-filter sketch appears after this list).
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
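As flagged in the NAF entry above, representing attenuation as a continuous function of 3D coordinates can be sketched with a small fully-connected network. The learned hash encoder of the actual paper is replaced here by a simple sinusoidal positional encoding, and all layer sizes and names are illustrative assumptions.

```python
import numpy as np

def positional_encoding(xyz, n_freqs=6):
    """Sinusoidal encoding of 3D coordinates -- a simple stand-in for the
    learned hash encoding used in the actual paper."""
    feats = [xyz]
    for k in range(n_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * xyz))
        feats.append(np.cos((2.0 ** k) * np.pi * xyz))
    return np.concatenate(feats, axis=-1)

class AttenuationField:
    """Tiny fully-connected network mapping encoded 3D coordinates to a
    non-negative attenuation coefficient."""

    def __init__(self, n_freqs=6, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.n_freqs = n_freqs
        in_dim = 3 + 2 * n_freqs * 3                      # raw coords + sin/cos features
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, xyz):
        h = np.maximum(positional_encoding(xyz, self.n_freqs) @ self.w1 + self.b1, 0.0)  # ReLU layer
        return np.log1p(np.exp(h @ self.w2 + self.b2))    # softplus keeps attenuation >= 0
```

Reconstruction would then amount to fitting this field so that simulated line integrals through it match the measured projections; that self-supervised fitting loop is beyond the scope of this sketch.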
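For the frequency-aware dynamic network entry, the core routing idea can be sketched as follows: split a patch into low- and high-frequency components via the DCT and send each through a differently priced branch. The cutoff, the branch functions, and all names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
from scipy.fft import dctn, idctn  # n-dimensional discrete cosine transform

def split_by_frequency(patch, cutoff=8):
    """Split an image patch into low- and high-frequency components using the
    DCT: coefficients in the top-left `cutoff` x `cutoff` corner are treated
    as low frequency, everything else as high frequency."""
    coeffs = dctn(patch, norm="ortho")
    low_mask = np.zeros_like(coeffs)
    low_mask[:cutoff, :cutoff] = 1.0
    low = idctn(coeffs * low_mask, norm="ortho")
    high = patch - low
    return low, high

def frequency_aware_forward(patch, cheap_branch, expensive_branch, cutoff=8):
    """Route the low-frequency part through a cheap branch and the
    high-frequency part through an expensive branch, then recombine."""
    low, high = split_by_frequency(patch, cutoff)
    return cheap_branch(low) + expensive_branch(high)

# Example usage with trivial branch functions standing in for sub-networks:
if __name__ == "__main__":
    patch = np.random.rand(32, 32)
    out = frequency_aware_forward(patch, cheap_branch=lambda x: x,
                                  expensive_branch=lambda x: 2.0 * x)
    print(out.shape)  # (32, 32)
```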
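Finally, for the separated-filters entry, a minimal sketch of the generic idea: approximate a 2D convolution kernel by a rank-1 outer product of two 1D filters (via the SVD) and apply it as two cheaper 1D passes. The specific transformation used in that paper is not described in the summary above, so this is only the textbook version.

```python
import numpy as np
from scipy.ndimage import convolve1d

def separate_kernel(kernel_2d):
    """Rank-1 approximation of a 2D kernel as an outer product of a column
    filter and a row filter, obtained from the SVD. Exact only if the kernel
    is rank-1 (e.g. a Gaussian); otherwise an approximation."""
    u, s, vt = np.linalg.svd(kernel_2d)
    col = u[:, 0] * np.sqrt(s[0])
    row = vt[0, :] * np.sqrt(s[0])
    return col, row

def separable_convolve(image, kernel_2d):
    """Apply the separated filters as two 1D convolutions, costing O(k) per
    pixel per pass instead of O(k^2) for the full 2D kernel."""
    col, row = separate_kernel(kernel_2d)
    tmp = convolve1d(image, col, axis=0, mode="nearest")
    return convolve1d(tmp, row, axis=1, mode="nearest")
```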