Optimizing CT Scan Geometries With and Without Gradients
- URL: http://arxiv.org/abs/2302.06251v1
- Date: Mon, 13 Feb 2023 10:44:41 GMT
- Title: Optimizing CT Scan Geometries With and Without Gradients
- Authors: Mareike Thies, Fabian Wagner, Noah Maul, Laura Pfaff, Linda-Sophie
Schneider, Christopher Syben, Andreas Maier
- Abstract summary: We show that gradient-based optimization algorithms are a possible alternative to gradient-free algorithms.
Gradient-based algorithms converge substantially faster while being comparable to gradient-free algorithms in terms of capture range and robustness to the number of free parameters.
- Score: 7.788823739816626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In computed tomography (CT), the projection geometry used for data
acquisition needs to be known precisely to obtain a clear reconstructed image.
Rigid patient motion is one cause of misalignment between the measured data and
the employed geometry. Commonly, such motion is compensated by solving an
optimization problem that, e.g., maximizes the quality of the reconstructed
image with respect to the projection geometry. So far, gradient-free
optimization algorithms have been utilized to find the solution for this
problem. Here, we show that gradient-based optimization algorithms are a
possible alternative and compare the performance to their gradient-free
counterparts on a benchmark motion compensation problem. Gradient-based
algorithms converge substantially faster while being comparable to
gradient-free algorithms in terms of capture range and robustness to the number
of free parameters. Hence, gradient-based optimization is a viable alternative
for this type of problem.
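To make the setup concrete, here is a minimal, hedged sketch of gradient-based geometry optimization. It is not the authors' pipeline: the differentiable CT reconstruction operator is replaced by a toy 2D rigid warp (PyTorch's affine_grid/grid_sample), and the image-quality objective by a simple mean-squared error against the displaced image; all names and constants are illustrative.

import torch
import torch.nn.functional as F

def make_phantom(n=64):
    # A simple square "phantom" standing in for a reconstructed CT slice.
    img = torch.zeros(n, n)
    img[n // 4 : 3 * n // 4, n // 4 : 3 * n // 4] = 1.0
    return img[None, None]          # shape (1, 1, n, n)

def rigid_warp(img, theta, tx, ty):
    # Differentiably apply a rotation theta and translation (tx, ty),
    # standing in for the rigid-motion parameters of the projection geometry.
    cos, sin = torch.cos(theta), torch.sin(theta)
    mat = torch.stack([
        torch.stack([cos, -sin, tx]),
        torch.stack([sin,  cos, ty]),
    ])[None]                         # (1, 2, 3) affine matrix
    grid = F.affine_grid(mat, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

reference = make_phantom()
# "Measured" data: the phantom displaced by an unknown rigid motion.
true_motion = (torch.tensor(0.3), torch.tensor(0.1), torch.tensor(-0.15))
measured = rigid_warp(reference, *true_motion).detach()

# Free geometry parameters, recovered by gradient-based optimization.
params = [torch.zeros((), requires_grad=True) for _ in range(3)]
optimizer = torch.optim.Adam(params, lr=0.05)
for step in range(300):
    optimizer.zero_grad()
    loss = F.mse_loss(rigid_warp(reference, *params), measured)
    loss.backward()                  # gradients w.r.t. geometry via autodiff
    optimizer.step()

print([round(p.item(), 3) for p in params])   # should approach 0.3, 0.1, -0.15

A gradient-free counterpart, e.g. scipy.optimize.minimize(..., method='Nelder-Mead') on the same loss, uses only function evaluations; the abstract's claim is that the autodiff route above converges in substantially fewer iterations at comparable capture range.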
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
We obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Gradient-free neural topology optimization [0.0]
Gradient-free algorithms require many more iterations to converge than gradient-based algorithms.
This has made them unviable for topology optimization, due to the high computational cost per iteration and the high dimensionality of these problems.
We propose a pre-trained neural reparameterization strategy that leads to at least one order of magnitude decrease in iteration count when optimizing the designs in latent space.
arXiv Detail & Related papers (2024-03-07T23:00:49Z)
- Efficient and Accurate Optimal Transport with Mirror Descent and Conjugate Gradients [15.128885770407132]
We design a novel algorithm for optimal transport by drawing from the entropic optimal transport, mirror descent and conjugate gradients literatures.
Our scalable and GPU-parallelizable algorithm computes the Wasserstein distance with extreme precision, reaching relative error rates of $10^{-8}$ without numerical stability issues.
arXiv Detail & Related papers (2023-07-17T14:09:43Z) - A Variance-Reduced Stochastic Gradient Tracking Algorithm for
Decentralized Optimization with Orthogonality Constraints [7.028225540638832]
We propose a novel algorithm for decentralized optimization with orthogonality constraints.
VRSGT is the first algorithm for decentralized optimization with orthogonality constraints that reduces both sampling and communication complexities simultaneously.
In numerical experiments, VRSGT shows promising performance in a real-world autonomous driving application.
arXiv Detail & Related papers (2022-08-29T14:46:44Z) - Fast Multi-grid Methods for Minimizing Curvature Energy [6.882141405929301]
We propose fast multi-grid algorithms for minimizing mean curvature and Gaussian curvature energy functionals.
No artificial parameters are introduced in our formulation, which guarantees the robustness of the proposed algorithm.
Numerical experiments on both image denoising and CT reconstruction problems demonstrate the method's ability to recover image texture.
arXiv Detail & Related papers (2022-04-17T04:34:38Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in iteration complexity and in function query cost (a generic two-point ZO estimator is sketched after this list).
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Learned Block Iterative Shrinkage Thresholding Algorithm for
Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
arXiv Detail & Related papers (2020-12-07T09:27:16Z) - An adaptive stochastic gradient-free approach for high-dimensional
blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth blackbox optimization problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks.
arXiv Detail & Related papers (2020-06-18T22:47:58Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronized gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (defined after this list).
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad algorithm for non-concave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones can be observed empirically in GAN training.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
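As forward-referenced in the zeroth-order entry above, here is a generic two-point ZO gradient estimator with a toy coordinate importance-sampling rule. It illustrates the general ZO idea, not that paper's algorithm; the sampling rule, names, and constants are all assumptions made for this sketch.

import numpy as np

def zo_step(f, x, importance, lr=0.05, mu=1e-4, k=2, rng=np.random):
    # One descent step using only function evaluations (no gradients).
    d = x.size
    # Mix the importance weights with uniform exploration so that every
    # coordinate keeps a nonzero sampling probability.
    probs = 0.5 * importance / importance.sum() + 0.5 / d
    coords = rng.choice(d, size=k, replace=True, p=probs)
    grad = np.zeros(d)
    for i in coords:
        e = np.zeros(d)
        e[i] = 1.0
        # Two-point finite-difference estimate of the i-th partial derivative.
        g = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
        grad[i] += g / (k * probs[i])    # importance-weighted contribution
        importance[i] = 0.9 * importance[i] + 0.1 * abs(g)
    return x - lr * grad

f = lambda x: float(np.sum((x - 1.0) ** 2))   # toy smooth objective
x, importance = np.zeros(5), np.ones(5)
for _ in range(1000):
    x = zo_step(f, x, importance)
print(np.round(x, 3))                          # drifts toward all-ones

Each step costs 2k function queries, which is the "function query cost" that entry refers to; a gradient-based method on the same problem would instead need one backward pass per step.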
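The weakly-convex-optimization entry above measures stationarity through the gradient of the Moreau envelope; since the term may be unfamiliar, here is the standard textbook definition (not specific to that paper). For weakly convex $f$ and sufficiently small $\lambda > 0$:

f_{\lambda}(x) = \min_{y} \left\{ f(y) + \frac{1}{2\lambda} \lVert y - x \rVert^{2} \right\},
\qquad
\nabla f_{\lambda}(x) = \frac{1}{\lambda} \bigl( x - \operatorname{prox}_{\lambda f}(x) \bigr).

A small $\lVert \nabla f_{\lambda}(x) \rVert$ certifies that $x$ is close to a point that is itself nearly stationary for $f$, which is the quantity whose $\tilde{\mathcal{O}}(t^{-1/4})$ rate that paper establishes.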
This list is automatically generated from the titles and abstracts of the papers on this site.