Deep Learning Computed Tomography based on the Defrise and Clack
Algorithm
- URL: http://arxiv.org/abs/2403.00426v1
- Date: Fri, 1 Mar 2024 10:24:04 GMT
- Title: Deep Learning Computed Tomography based on the Defrise and Clack
Algorithm
- Authors: Chengze Ye, Linda-Sophie Schneider, Yipeng Sun, Andreas Maier
- Abstract summary: This study presents a novel approach for reconstructing cone beam computed tomography (CBCT) for specific orbits using known operator learning.
Unlike traditional methods, this technique employs a filtered backprojection type (FBP-type) algorithm, which integrates a unique, adaptive filtering process.
The filter is designed for a specific orbit geometry and is obtained using a data-driven approach based on deep learning.
- Score: 4.137125610532773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents a novel approach for reconstructing cone beam computed
tomography (CBCT) for specific orbits using known operator learning. Unlike
traditional methods, this technique employs a filtered backprojection type
(FBP-type) algorithm, which integrates a unique, adaptive filtering process.
This process involves a series of operations, including weightings,
differentiations, the 2D Radon transform, and backprojection. The filter is
designed for a specific orbit geometry and is obtained using a data-driven
approach based on deep learning. The approach efficiently learns and optimizes
the orbit-related component of the filter. The method has demonstrated its
ability through experimentation by successfully learning parameters from
circular orbit projection data. Subsequently, the optimized parameters are used
to reconstruct images, resulting in outcomes that closely resemble the
analytical solution. This demonstrates the potential of the method to learn
appropriate parameters from any specific orbit projection data and achieve
reconstruction. The algorithm has demonstrated improvement, particularly in
enhancing reconstruction speed and reducing memory usage for handling specific
orbit reconstruction.
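The FBP-type pipeline described above can be sketched in a simplified parallel-beam setting, where the orbit-specific filtering collapses to a 1D frequency-domain filter per projection followed by backprojection. This is a toy analogy, not the paper's cone-beam Defrise-Clack implementation: the function names are illustrative, and the fixed ramp weights stand in for the orbit-related filter parameters that the paper learns from data.

```python
import numpy as np

def ramp_filter(n):
    # Frequency-domain ramp |f| -- the classical FBP filter. In a
    # known-operator setting these weights would be trainable parameters.
    return np.abs(np.fft.fftfreq(n))

def filter_projections(sinogram, filt):
    # Apply a 1D filter to each projection (row) in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))

def backproject(sinogram, angles, size):
    # Simple parallel-beam backprojection onto a size x size grid.
    recon = np.zeros((size, size))
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    for proj, theta in zip(sinogram, angles):
        t = X * np.cos(theta) + Y * np.sin(theta) + size / 2
        idx = np.clip(t.astype(int), 0, size - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles)
```

In the learned variant, the filter weights would be optimized end-to-end against reference reconstructions while the weighting, differentiation, Radon-transform, and backprojection operators stay fixed as known operators.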
Related papers
- Machine Learning Training Optimization using the Barycentric Correction
Procedure [0.0]
This study proposes combining machine learning algorithms with an efficient methodology known as the barycentric correction procedure (BCP).
This combination was found to yield significant time savings on synthetic and real data, without losing accuracy as the number of instances and dimensions increases.
arXiv Detail & Related papers (2024-03-01T13:56:36Z)
- Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
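A minimal sketch of the EKF-as-online-learner idea follows, simplified to a diagonal-only posterior covariance (the paper additionally carries a low-rank correction on top of the diagonal). The function name and signature are illustrative assumptions, not the authors' API.

```python
import numpy as np

def ekf_step(w, p_diag, x, y, f, jac, r=0.1):
    """One EKF update for the parameters w of a nonlinear model y ~ f(x, w),
    keeping only a diagonal posterior covariance p_diag (the paper's method
    uses a low-rank plus diagonal decomposition instead)."""
    h = jac(x, w)                   # Jacobian d f / d w, shape (dim,)
    s = h @ (p_diag * h) + r        # innovation variance (scalar observation)
    k = p_diag * h / s              # Kalman gain
    w_new = w + k * (y - f(x, w))   # deterministic parameter update
    p_new = p_diag * (1.0 - k * h)  # diagonal of (I - K H) P
    return w_new, p_new
```

Note the update is fully deterministic given the data stream and needs no step-size tuning: the gain is set by the covariance and noise estimates.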
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset
Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that weights trained on the synthetic data are robust against accumulated-error perturbations when regularization towards a flat trajectory is applied.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Convolutional Analysis Operator Learning by End-To-End Training of
Iterative Neural Networks [3.6280929178575994]
We show how convolutional sparsifying filters can be efficiently learned by end-to-end training of iterative neural networks.
We evaluated our approach on a non-Cartesian 2D cardiac cine MRI example and show that the obtained filters are better suited to the corresponding reconstruction algorithm than those obtained by decoupled pre-training.
arXiv Detail & Related papers (2022-03-04T07:32:16Z)
- Iterated Block Particle Filter for High-dimensional Parameter Learning:
Beating the Curse of Dimensionality [0.6599344783327054]
Parameter learning for high-dimensional, partially observed, and nonlinear processes is a methodological challenge.
We propose the iterated block particle filter (IBPF) for learning high-dimensional inference parameters over graphical state space models.
arXiv Detail & Related papers (2021-10-20T19:36:55Z)
- SPECT Angle Interpolation Based on Deep Learning Methodologies [0.0]
A novel method for SPECT angle interpolation based on deep learning methodologies is presented.
Projection data from software phantoms were used to train the proposed model.
To evaluate the efficacy of the method, Shepp-Logan-based phantoms with various levels of added noise were used.
The resulting interpolated sinograms are reconstructed using Ordered Subset Expectation Maximization (OSEM) and compared to the reconstructions of the original sinograms.
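For context, the classical baseline that such a deep-learning interpolator is meant to improve upon is simple linear interpolation between adjacent projection angles of the sinogram. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def interpolate_angles(sinogram):
    """Double the angular sampling of a sinogram (angles x detector bins)
    by linear interpolation between adjacent projections -- the classical
    baseline for angle interpolation."""
    n_ang, n_det = sinogram.shape
    out = np.empty((2 * n_ang - 1, n_det))
    out[0::2] = sinogram                              # keep measured angles
    out[1::2] = 0.5 * (sinogram[:-1] + sinogram[1:])  # interpolate midpoints
    return out
```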
arXiv Detail & Related papers (2021-08-09T09:19:51Z)
- Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z)
- KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
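The "loss as a noisy observation" framing can be illustrated with a scalar Kalman filter that tracks the underlying loss level from noisy mini-batch evaluations. This is a toy sketch of the idea, not the KaFiStO algorithm itself; the function name and noise parameters are illustrative assumptions.

```python
def kalman_track_loss(observations, q=1e-4, r=0.5):
    """Track the underlying loss level from noisy mini-batch loss values
    with a scalar Kalman filter.  q: process noise (how fast the true loss
    may drift), r: observation noise (mini-batch sampling variance)."""
    x, p = observations[0], 1.0        # initial state estimate and variance
    estimates = [x]
    for z in observations[1:]:
        p = p + q                      # predict: the true loss may have drifted
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct with the new noisy loss value
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

The filtered estimate smooths out the mini-batch noise, which is the kind of reference signal a Kalman-filtering optimizer can exploit.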
arXiv Detail & Related papers (2021-07-07T16:13:57Z)
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global
Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass, and Padé approximants in the backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performances on both large-scale and fine-grained datasets.
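For reference, the approximate matrix square root commonly contrasted with SVD in the global covariance pooling literature is the coupled Newton-Schulz iteration. The sketch below shows that iterative approximation; it is not the paper's SVD-forward/Padé-backward meta-layer, and the function name is an illustrative assumption.

```python
import numpy as np

def newton_schulz_sqrt(A, n_iter=15):
    """Approximate the square root of an SPD matrix A by coupled
    Newton-Schulz iterations, the kind of iterative approximation used in
    GCP layers in place of an exact SVD-based square root."""
    norm = np.linalg.norm(A)           # pre-normalize so the iteration converges
    Y = A / norm
    Z = np.eye(A.shape[0])
    for _ in range(n_iter):
        T = 0.5 * (3.0 * np.eye(A.shape[0]) - Z @ Y)
        Y, Z = Y @ T, T @ Z            # Y -> sqrt(A/norm), Z -> its inverse
    return Y * np.sqrt(norm)
```

Unlike SVD, the iteration uses only matrix products, which is GPU-friendly and differentiable without special-casing degenerate singular values.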
arXiv Detail & Related papers (2021-05-06T08:03:45Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
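The core iteration being unfolded is ISTA: a gradient step on the data term followed by soft-thresholding. A minimal sketch of the plain (non-block, non-learned) version is below; in the learned variant, each unrolled iteration becomes a network layer whose regularization parameter and step size are trained. Function names are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm; in the unrolled network, lam would
    # be a trainable per-layer regularization parameter.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, step, n_iter=100):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.  Unrolling a
    fixed number of these iterations into network layers with learned
    lam/step is the idea behind learned iterative shrinkage thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```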
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.