Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration
- URL: http://arxiv.org/abs/2109.12688v1
- Date: Sun, 26 Sep 2021 19:56:45 GMT
- Title: Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration
- Authors: Alexander Thorley, Xi Jia, Hyung Jin Chang, Boyang Liu, Karina
Bunting, Victoria Stoll, Antonio de Marvao, Declan P. O'Regan, Georgios
Gkoutos, Dipak Kotecha, Jinming Duan
- Abstract summary: Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
- Score: 63.15453821022452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deterministic approaches using iterative optimisation have been historically
successful in diffeomorphic image registration (DiffIR). Although these
approaches are highly accurate, they typically carry a significant
computational burden. Recent developments in stochastic approaches based on
deep learning have achieved sub-second runtimes for DiffIR with competitive
registration accuracy, offering a fast alternative to conventional iterative
methods. In this paper, we attempt to reduce this difference in speed whilst
retaining the performance advantage of iterative approaches in DiffIR. We first
propose a simple iterative scheme that functionally composes intermediate
non-stationary velocity fields to handle large deformations in images whilst
guaranteeing diffeomorphisms in the resultant deformation. We then propose a
convex optimisation model that uses a regularisation term of arbitrary order to
impose smoothness on these velocity fields and solve this model with a fast
algorithm that combines Nesterov gradient descent and the alternating direction
method of multipliers (ADMM). Finally, we leverage the computational power of
GPU to implement this accelerated ADMM solver on a 3D cardiac MRI dataset,
further reducing runtime to less than 2 seconds. In addition to producing
strictly diffeomorphic deformations, our methods outperform both
state-of-the-art deep learning-based and iterative DiffIR approaches in terms
of Dice and Hausdorff scores, with speed approaching the inference time of deep
learning-based methods.
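The abstract above describes a solver that combines Nesterov extrapolation with the alternating direction method of multipliers. As a rough illustration of that idea only (not the paper's actual registration model or regulariser), the following sketch applies Goldstein-style accelerated ADMM to a generic l1-regularised least-squares problem; the function names, the choice of problem, and all parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def nesterov_admm(A, b, lam=0.1, rho=1.0, iters=300):
    """Accelerated ADMM sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    via the splitting x = z. Residual-based restarting (often used when the
    objective is not strongly convex) is omitted for brevity."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                    # scaled dual variable
    z_hat, u_hat = z.copy(), u.copy()  # extrapolated copies
    t = 1.0
    AtA, Atb = A.T @ A, A.T @ b
    # Factorise (A^T A + rho I) once; reused in every x-update
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        # x-update: solve the quadratic subproblem
        rhs = Atb + rho * (z_hat - u_hat)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step (soft thresholding for the l1 term)
        z_prev, u_prev = z, u
        z = soft_threshold(x + u_hat, lam / rho)
        # dual ascent
        u = u_hat + x - z
        # Nesterov-style extrapolation of z and u
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_next
        z_hat = z + beta * (z - z_prev)
        u_hat = u + beta * (u - u_prev)
        t = t_next
    return x
```

The paper's model replaces the l1 term with an arbitrary-order smoothness regulariser on velocity fields and runs the subproblem solves on GPU; the extrapolation/splitting skeleton above is the transferable part.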
Related papers
- The Stochastic Conjugate Subgradient Algorithm For Kernel Support Vector Machines [1.738375118265695]
This paper proposes an innovative method specifically designed for kernel support vector machines (SVMs).
It not only runs faster per iteration but also exhibits enhanced convergence compared to conventional SFO techniques.
Our experimental results demonstrate that the proposed algorithm not only maintains but potentially exceeds the scalability of SFO methods.
arXiv Detail & Related papers (2024-07-30T17:03:19Z) - Variational Bayes image restoration with compressive autoencoders [4.879530644978008]
Regularization of inverse problems is of paramount importance in computational imaging.
In this work, we first propose to use compressive autoencoders instead of state-of-the-art generative models.
As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm.
arXiv Detail & Related papers (2023-11-29T15:49:31Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-parameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Deep learning numerical methods for high-dimensional fully nonlinear
PIDEs and coupled FBSDEs with jumps [26.28912742740653]
We propose a deep learning algorithm for solving high-dimensional parabolic integro-differential equations (PIDEs).
The jump-diffusion process is driven by a Brownian motion and an independent compensated Poisson random measure.
To derive error estimates for this deep learning algorithm, the convergence of the Markovian iteration, the error bound of the Euler time discretisation, and the simulation error of the deep learning algorithm are investigated.
arXiv Detail & Related papers (2023-01-30T13:55:42Z) - SHINE: SHaring the INverse Estimate from the forward pass for bi-level
optimization and implicit models [15.541264326378366]
In recent years, implicit deep learning has emerged as a method to increase the depth of deep neural networks.
The training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.
We propose a novel strategy to tackle this computational bottleneck from which many bi-level problems suffer.
arXiv Detail & Related papers (2021-06-01T15:07:34Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial
Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
arXiv Detail & Related papers (2020-10-08T18:36:51Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Differentially Private Accelerated Optimization Algorithms [0.7874708385247353]
We present two classes of differentially private optimization algorithms.
The first algorithm is inspired by Polyak's heavy ball method.
The second class of algorithms are based on Nesterov's accelerated gradient method.
arXiv Detail & Related papers (2020-08-05T08:23:01Z)
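The last summary above names the two classic momentum families its algorithms build on. As context only, here is a minimal sketch contrasting Polyak's heavy-ball update with Nesterov's look-ahead update on a generic smooth objective; the differential-privacy noise mechanism that is the paper's actual contribution is deliberately omitted, and all names and hyper-parameters are illustrative.

```python
import numpy as np

def heavy_ball(grad, x0, lr=0.1, beta=0.9, steps=200):
    # Polyak heavy ball: gradient at the current point,
    # plus momentum on the previous iterate difference.
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(steps):
        x, x_prev = x - lr * grad(x) + beta * (x - x_prev), x
    return x

def nesterov(grad, x0, lr=0.1, beta=0.9, steps=200):
    # Nesterov acceleration: the gradient is evaluated at a
    # look-ahead point extrapolated along the velocity.
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        v = beta * v - lr * grad(x + beta * v)
        x = x + v
    return x
```

The only structural difference is where the gradient is evaluated: the current iterate (heavy ball) versus the extrapolated point (Nesterov); the cited paper adds calibrated noise to these updates to obtain differential privacy.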
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.