Combining the band-limited parameterization and Semi-Lagrangian
Runge--Kutta integration for efficient PDE-constrained LDDMM
- URL: http://arxiv.org/abs/2006.06823v1
- Date: Wed, 10 Jun 2020 07:12:18 GMT
- Title: Combining the band-limited parameterization and Semi-Lagrangian
Runge--Kutta integration for efficient PDE-constrained LDDMM
- Authors: Monica Hernandez
- Abstract summary: The original combination of Gauss--Newton--Krylov optimization and Runge--Kutta integration shows excellent numerical accuracy and a fast convergence rate.
However, its most significant limitation is the huge computational complexity.
This limitation has been treated independently by the problem formulation in the space of band-limited vector fields and Semi-Lagrangian integration.
- Score: 1.370633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The family of PDE-constrained LDDMM methods is emerging as a particularly
interesting approach for physically meaningful diffeomorphic transformations.
The original combination of Gauss--Newton--Krylov optimization and Runge--Kutta
integration shows excellent numerical accuracy and a fast convergence rate.
However, its most significant limitation is the huge computational complexity,
hindering its extensive use in applied Computational Anatomy studies. This
limitation has been addressed independently by formulating the problem in the
space of band-limited vector fields and by Semi-Lagrangian integration. The
purpose of this work is to combine both approaches in three variants of
band-limited PDE-constrained LDDMM to further increase their computational efficiency.
The accuracy of the resulting methods is evaluated extensively. For all the
variants, the proposed combined approach yields a significant increase in
computational efficiency. In addition, the variant based on the deformation
state equation consistently emerges as the best-performing method across
all the evaluation frameworks in terms of accuracy and efficiency.
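To make the combination concrete, here is a minimal sketch of semi-Lagrangian transport under a band-limited velocity field on a periodic 2-D grid. It is a toy illustration only, not the paper's implementation: it assumes a stationary velocity field (LDDMM works with time-dependent ones), a hypothetical `bandlimited_velocity` helper, and first-order interpolation.

```python
# Toy sketch: semi-Lagrangian transport of an image by a band-limited
# velocity field on a periodic 2-D grid (not the paper's code).
import numpy as np
from scipy.ndimage import map_coordinates

def bandlimited_velocity(n, band=8, seed=0):
    """Random velocity whose Fourier support is restricted to |k| < band."""
    rng = np.random.default_rng(seed)
    v = np.zeros((2, n, n))
    k = np.fft.fftfreq(n, d=1.0 / n)                      # integer frequencies
    mask = (np.abs(k)[:, None] < band) & (np.abs(k)[None, :] < band)
    for d in range(2):
        coef = np.zeros((n, n), dtype=complex)
        coef[mask] = rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum())
        v[d] = np.real(np.fft.ifft2(coef))                # band-limited by construction
    return v / np.abs(v).max()                            # unit amplitude

def semi_lagrangian_step(img, v, dt):
    """One unconditionally stable transport step via RK2 backtracing."""
    n = img.shape[0]
    y, x = np.mgrid[0:n, 0:n].astype(float)
    xm, ym = x - 0.5 * dt * v[0], y - 0.5 * dt * v[1]     # midpoint of the characteristic
    vx = map_coordinates(v[0], [ym, xm], order=1, mode='wrap')
    vy = map_coordinates(v[1], [ym, xm], order=1, mode='wrap')
    return map_coordinates(img, [y - dt * vy, x - dt * vx], order=1, mode='wrap')

n = 64
img = np.zeros((n, n)); img[24:40, 24:40] = 1.0           # toy image
v = bandlimited_velocity(n)
for _ in range(10):
    img = semi_lagrangian_step(img, v, dt=2.0)            # large steps remain stable
```

The two efficiency levers appear directly: the velocity lives on a small set of Fourier coefficients, and the semi-Lagrangian step tolerates time steps far beyond the CFL limit of explicit Runge--Kutta schemes.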
Related papers
- Stochastic Optimization with Optimal Importance Sampling [49.484190237840714]
We propose an iterative algorithm that jointly updates the decision variable and the IS distribution without requiring time-scale separation between the two.
Our method achieves the lowest possible asymptotic variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family.
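As a rough illustration of the joint update described above (our toy construction, not the paper's algorithm), the sketch below minimizes E[(x - xi)^2] over x for xi ~ N(0, 1) while adapting the mean of a Gaussian importance distribution to reduce the variance of the weighted gradient estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
x, mu = 5.0, 0.0                          # decision variable and IS mean
log_p = lambda z: -0.5 * z**2             # unnormalized log N(0, 1)
for t in range(2000):
    xi = mu + rng.standard_normal(64)     # sample from q = N(mu, 1)
    w = np.exp(log_p(xi) + 0.5 * (xi - mu)**2)   # likelihood ratios p/q
    g = 2.0 * (x - xi)                    # per-sample gradient of (x - xi)^2
    x -= 0.1 * np.mean(w * g)             # SGD step with IS-weighted gradient
    # descend the estimator's second moment: dE_q[(g w)^2]/dmu = -E_q[g^2 w^2 (xi - mu)]
    mu += 0.01 * np.mean(g**2 * w**2 * (xi - mu))
print(x, mu)                              # x -> 0; mu settles where g^2 w^2 balances
```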
arXiv Detail & Related papers (2025-04-04T16:10:18Z)
- An Enhanced Zeroth-Order Stochastic Frank-Wolfe Framework for Constrained Finite-Sum Optimization [15.652261277429968]
We propose an enhanced zeroth-order stochastic Frank-Wolfe framework to address constrained finite-sum optimization problems.
Our method introduces a novel double variance reduction framework that effectively reduces the approximation error induced by zeroth-order oracles.
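A generic sketch of the zeroth-order Frank-Wolfe template underlying the paper (without its double variance reduction, which is the actual contribution): the gradient is estimated from function evaluations alone, via central differences along random directions, and the linear minimization oracle of the l1 ball keeps every iterate feasible.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 50
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
f = lambda x: np.mean((A @ x - b)**2)            # finite-sum objective

def zo_grad(f, x, m=40, eps=1e-4):
    """Average m central-difference directional-derivative estimates."""
    g = np.zeros_like(x)
    for _ in range(m):
        u = rng.standard_normal(x.size)
        g += (f(x + eps * u) - f(x - eps * u)) / (2 * eps) * u
    return g / m

x, radius = np.zeros(d), 1.0
for t in range(200):
    g = zo_grad(f, x)
    i = np.argmax(np.abs(g))                          # linear minimization oracle:
    s = np.zeros(d); s[i] = -radius * np.sign(g[i])   # a vertex of the l1 ball
    x += 2.0 / (t + 2) * (s - x)                      # standard Frank-Wolfe step
print(f(x))
```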
arXiv Detail & Related papers (2025-01-13T10:53:19Z)
- A Hybrid Kernel-Free Boundary Integral Method with Operator Learning for Solving Parametric Partial Differential Equations In Complex Domains [0.0]
The Kernel-Free Boundary Integral (KFBI) method presents an iterative solution to boundary integral equations arising from elliptic partial differential equations (PDEs).
We propose a hybrid KFBI method, integrating the foundational principles of the KFBI method with the capabilities of deep learning.
arXiv Detail & Related papers (2024-04-23T17:25:35Z)
- Enhancing Low-Order Discontinuous Galerkin Methods with Neural Ordinary Differential Equations for Compressible Navier--Stokes Equations [0.1578515540930834]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations.
This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term.
We demonstrate the performance of the proposed framework through two examples.
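A schematic toy of the solver-plus-network composition (our construction, not the paper's DG framework): a first-order upwind update for 1-D advection stands in for the low-order solver, and a small untrained MLP plays the role of the learned source term. In the paper both parts are differentiable, so the network can be trained end-to-end against high-order reference solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, c = 64, 1.0 / 64, 1.0
W1, b1 = rng.standard_normal((8, 3)) * 0.01, np.zeros(8)   # toy MLP weights
W2, b2 = rng.standard_normal((1, 8)) * 0.01, np.zeros(1)

def nn_source(u):
    """Pointwise MLP acting on the local stencil (u_{i-1}, u_i, u_{i+1})."""
    stencil = np.stack([np.roll(u, 1), u, np.roll(u, -1)])  # shape (3, n)
    h = np.tanh(W1 @ stencil + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

def rhs(u):
    upwind = -c * (u - np.roll(u, 1)) / dx     # low-order flux (stand-in for DG)
    return upwind + nn_source(u)               # plus the learned correction

u = np.exp(-100 * (np.linspace(0, 1, n) - 0.5)**2)   # Gaussian pulse
dt = 0.5 * dx / c                                    # CFL-limited step
for _ in range(100):
    u = u + dt * rhs(u)                              # forward Euler for brevity
```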
arXiv Detail & Related papers (2023-10-29T04:26:23Z)
- Moreau Envelope ADMM for Decentralized Weakly Convex Optimization [55.2289666758254]
This paper proposes a proximal variant of the alternating direction method of multipliers (ADMM) for distributed optimization.
The results of our numerical experiments indicate that our method is faster and more robust than widely-used approaches.
arXiv Detail & Related papers (2023-08-31T14:16:30Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
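For reference, a minimal sketch of the classical NCE objective the paper starts from (a toy, not its compositional method): an unnormalized 1-D Gaussian, whose log-normalizer c is treated as a free parameter, is fit with the logistic loss against Gaussian noise samples; the gradients are derived by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
data = 2.0 + rng.standard_normal(1000)            # true distribution: N(2, 1)
noise = 3.0 * rng.standard_normal(1000)           # noise distribution q = N(0, 9)
log_q = lambda u: -0.5 * u**2 / 9.0 - 0.5 * np.log(2 * np.pi * 9.0)

mu, c = 0.0, 0.0                                  # unnormalized log-model:
log_phi = lambda u: -0.5 * (u - mu)**2 + c        # -(u - mu)^2 / 2 + c
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for t in range(500):
    gx = sigmoid(log_phi(data) - log_q(data))     # P(real | u) under the model
    gy = sigmoid(log_phi(noise) - log_q(noise))
    # hand-derived gradients of the logistic loss w.r.t. mu and c
    dmu = -np.mean((1 - gx) * (data - mu)) + np.mean(gy * (noise - mu))
    dc = -np.mean(1 - gx) + np.mean(gy)
    mu -= 0.1 * dmu; c -= 0.1 * dc
print(mu, np.exp(c))   # mu -> 2 and exp(c) -> 1/sqrt(2*pi), the true normalizer
```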
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach [1.9030954416586594]
We study the combination of the alternating direction method of multipliers (ADMM) with physics-informed neural networks (PINNs).
The resulting ADMM-PINNs algorithmic framework substantially enlarges the applicable range of PINNs to nonsmooth cases of PDE-constrained optimization problems.
We validate the efficiency of the ADMM-PINNs algorithmic framework by different prototype applications.
arXiv Detail & Related papers (2023-02-16T14:17:30Z)
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
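A minimal sketch of the dissipative-Hamiltonian viewpoint in the plain Euclidean case (far simpler than the authors' constrained manifold setting): splitting the friction, kick, and drift parts of q' = p, p' = -grad f(q) - gamma p yields a momentum-style method whose trajectory relaxes to a minimizer.

```python
import numpy as np

D = np.diag([1.0, 10.0])                 # ill-conditioned quadratic objective
f = lambda q: 0.5 * q @ D @ q
grad = lambda q: D @ q

q, p = np.array([3.0, 1.0]), np.zeros(2)
h, gamma = 0.05, 1.0
for _ in range(500):
    p = np.exp(-gamma * h) * p           # exact flow of the friction part
    p = p - h * grad(q)                  # symplectic Euler kick...
    q = q + h * p                        # ...and drift for H = |p|^2/2 + f(q)
print(q, f(q))                           # q approaches the minimizer at the origin
```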
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation problems, such as the fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency.
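For context, a minimal proximal-gradient (ISTA) sketch on the lasso, the kind of update the paper converts ADMM iterations into; this is a generic example rather than the paper's conversion, with the plain lasso standing in for the fused lasso for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(100)

lam = 0.1
L = np.linalg.norm(A, 2)**2                 # Lipschitz constant of the gradient
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(20)
for _ in range(500):
    g = A.T @ (A @ x - b)                   # gradient of 0.5 * ||Ax - b||^2
    x = soft(x - g / L, lam / L)            # proximal step = soft thresholding
print(np.round(x[:5], 3))                   # recovers the sparse support
```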
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivatives with finite differences.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
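A minimal sketch of the core primitive in its first-order case: a central finite difference approximates the directional derivative from two function evaluations, which are independent of one another and could be executed in parallel.

```python
import numpy as np

def directional_derivative(f, x, v, eps=1e-5):
    """Approximate v . grad f(x) with a single central difference."""
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

f = lambda x: np.sum(np.sin(x))
x, v = np.ones(5), np.arange(5.0)
print(directional_derivative(f, x, v))   # approx cos(1) * (0+1+2+3+4) = 5.403
```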
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
- Approximate Dynamics Lead to More Optimal Control: Efficient Exact Derivatives [0.0]
We show here that the computational feasibility of meeting this accuracy requirement depends on the choice of propagation scheme and problem representation.
This methodology allows numerically efficient optimization of very high-dimensional dynamics.
arXiv Detail & Related papers (2020-05-20T10:02:19Z)
- Model Reduction and Neural Networks for Parametric PDEs [9.405458160620533]
We develop a framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
The proposed approach is motivated by the recent successes of neural networks and deep learning.
For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology.
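A schematic sketch of the reduce-map-reconstruct pattern on a toy problem (the paper's theory concerns a neural-network map between the reduced spaces; a linear least-squares fit is substituted here purely for brevity): inputs and outputs are compressed with PCA/POD, and the map is learned between coefficient spaces.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_grid, r = 200, 128, 10
xs = np.linspace(0, 1, n_grid)
a = rng.uniform(1.0, 3.0, n_samples)                       # scalar PDE parameter
inputs = np.sin(np.pi * a[:, None] * xs)                   # toy input functions
outputs = inputs / a[:, None]**2                           # toy solution operator

# PCA/POD bases: reduce both function spaces to r coefficients
Ui = np.linalg.svd(inputs.T, full_matrices=False)[0][:, :r]
Uo = np.linalg.svd(outputs.T, full_matrices=False)[0][:, :r]
ci, co = inputs @ Ui, outputs @ Uo                         # reduced coordinates

M = np.linalg.lstsq(ci, co, rcond=None)[0]                 # coefficient-space map
pred = (ci @ M) @ Uo.T                                     # lift back to the grid
print(np.linalg.norm(pred - outputs) / np.linalg.norm(outputs))
```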
arXiv Detail & Related papers (2020-05-07T00:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.