High-order geometric integrators for the variational Gaussian
approximation
- URL: http://arxiv.org/abs/2306.17608v2
- Date: Tue, 8 Aug 2023 13:14:32 GMT
- Title: High-order geometric integrators for the variational Gaussian
approximation
- Authors: Roya Moghaddasi Fereidani and Jiří J. L. Vaníček
- Abstract summary: We show that the resulting high-order geometric integrators are time-reversible and conserve the norm and the symplectic structure exactly, regardless of the time step.
We also show that the variational method may capture tunneling and, in general, improves accuracy over the non-variational thawed Gaussian approximation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Among the single-trajectory Gaussian-based methods for solving the
time-dependent Schrödinger equation, the variational Gaussian approximation
is the most accurate one. In contrast to Heller's original thawed Gaussian
approximation, it is symplectic, conserves energy exactly, and may partially
account for tunneling. However, the variational method is also much more
expensive. To improve its efficiency, we symmetrically compose the second-order
symplectic integrator of Faou and Lubich and obtain geometric integrators that
can achieve an arbitrary even order of convergence in the time step. We
demonstrate that the high-order integrators can speed up convergence
drastically compared to the second-order algorithm and, in contrast to the
popular fourth-order Runge-Kutta method, are time-reversible and conserve the
norm and the symplectic structure exactly, regardless of the time step. To show
that the method is not restricted to low-dimensional systems, we perform most
of the analysis on a non-separable twenty-dimensional model of coupled Morse
oscillators. We also show that the variational method may capture tunneling
and, in general, improves accuracy over the non-variational thawed Gaussian
approximation.
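The composition used here is generic: any symmetric second-order integrator can be raised to fourth order, and recursively to any even order, by the Suzuki-Yoshida triple jump. Below is a minimal Python sketch of that composition, with Störmer-Verlet on a one-dimensional harmonic oscillator standing in for the Faou-Lubich step; the oscillator, mass, and step size are illustrative assumptions, not the paper's twenty-dimensional Morse model.

```python
import numpy as np

def verlet(state, dt, grad_V, m=1.0):
    """One symmetric, symplectic, second-order (Stormer-Verlet) step."""
    q, p = state
    p = p - 0.5 * dt * grad_V(q)   # half kick
    q = q + dt * p / m             # full drift
    p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

def compose(step, order):
    """Raise a symmetric integrator of order (order - 2) to `order` via the
    triple jump Phi_{g1*dt} o Phi_{g2*dt} o Phi_{g1*dt}."""
    if order == 2:
        return step
    inner = compose(step, order - 2)
    g1 = 1.0 / (2.0 - 2.0 ** (1.0 / (order - 1)))
    g2 = 1.0 - 2.0 * g1            # g2 < 0: the middle substep runs backward
    def composed(state, dt, *args):
        state = inner(state, g1 * dt, *args)
        state = inner(state, g2 * dt, *args)
        state = inner(state, g1 * dt, *args)
        return state
    return composed

# Harmonic oscillator V(q) = q^2/2: exact solution q(t) = cos(t), p(t) = -sin(t).
grad_V = lambda q: q
step4 = compose(verlet, 4)         # fourth-order symmetric composition
state = (1.0, 0.0)
for _ in range(1000):
    state = step4(state, 0.01, grad_V)
q, p = state
print(q - np.cos(10.0), p + np.sin(10.0))   # global error ~ O(dt^4)
```

Because each substep is symplectic and the palindromic composition is symmetric, the high-order scheme inherits the base method's conservation properties for any time step; the cost is the backward middle substep, which is why a time-reversible base method is essential.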
Related papers
- Distributed Optimization via Energy Conservation Laws in Dilated Coordinates [5.35599092568615]
This paper introduces an energy conservation approach for analyzing continuous-time dynamical systems in dilated coordinates.
The convergence rates can be explicitly expressed in terms of the inverse time-dilation factor.
Its accelerated convergence behavior is benchmarked against various state-of-the-art distributed optimization algorithms on practical, large-scale problems.
arXiv Detail & Related papers (2024-09-28T08:02:43Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
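As a hedged illustration of the dual view (not the authors' exact algorithm), the GP posterior-mean weights solve (K + sigma^2 I) alpha = y, and randomized block-coordinate descent on the convex dual objective 0.5 alpha^T (K + sigma^2 I) alpha - alpha^T y converges to them; the kernel, data, and step size below are invented for the sketch.

```python
import numpy as np

# Toy data and kernel: everything here is illustrative, not the paper's setup.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

Kn = rbf(X, X) + 0.01 * np.eye(n)           # K + sigma^2 * I

# Dual objective 0.5*a^T Kn a - a^T y is minimized by a* = Kn^{-1} y,
# the GP posterior-mean weights; we descend it on random coordinate blocks.
alpha = np.zeros(n)
lr = 1.0 / np.linalg.eigvalsh(Kn).max()     # conservative step size
for _ in range(3000):
    idx = rng.choice(n, size=32, replace=False)   # random minibatch of coords
    g = Kn[idx] @ alpha - y[idx]                  # minibatch dual gradient
    alpha[idx] -= lr * g
print(np.linalg.norm(Kn @ alpha - y))       # residual shrinks toward zero
```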
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Family of Gaussian wavepacket dynamics methods from the perspective of a nonlinear Schrödinger equation [0.0]
We study such a nonlinear Schrödinger equation in general.
We show that several well-known Gaussian wavepacket dynamics methods, such as Heller's original thawed Gaussian approximation or Coalson and Karplus's variational Gaussian approximation, fit into this framework.
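For concreteness, here is a minimal sketch (in units hbar = m = 1, with an invented quartic potential) of the simplest member of this family, Heller's thawed Gaussian approximation in one dimension: the wavepacket exp[i(A(x-q)^2 + p(x-q) + gamma)] is propagated via ordinary differential equations for its parameters, obtained from a local harmonic expansion of the potential.

```python
import numpy as np

# Thawed Gaussian approximation (1D, hbar = m = 1):
#   dq/dt = p                      dp/dt = -V'(q)
#   dA/dt = -2A^2 - V''(q)/2       dgamma/dt = i*A + p^2/2 - V(q)
V   = lambda q: 0.5 * q**2 + 0.1 * q**4     # toy anharmonic potential
dV  = lambda q: q + 0.4 * q**3
d2V = lambda q: 1.0 + 1.2 * q**2

def rhs(y):
    q, p, A, g = y
    return np.array([p, -dV(q), -2*A**2 - 0.5*d2V(q), 1j*A + 0.5*p**2 - V(q)])

def rk4(y, dt):                              # plain RK4, just for the sketch
    k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
    k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

y = np.array([1.0, 0.0, 0.5j, 0.0], dtype=complex)   # q, p, width A, phase
for _ in range(1000):
    y = rk4(y, 0.01)
print(y)    # wavepacket centre (q, p) and complex width A at t = 10
```

RK4 here is deliberately naive: it preserves neither the norm nor the symplectic structure of the flow, which is exactly the deficiency the geometric integrators of the main paper address.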
arXiv Detail & Related papers (2023-02-20T19:01:25Z)
- Variational sparse inverse Cholesky approximation for latent Gaussian processes via double Kullback-Leibler minimization [6.012173616364571]
We combine a variational approximation of the posterior with a similar and efficient sparse inverse Cholesky (SIC)-restricted Kullback-Leibler-optimal approximation of the prior.
For this setting, our variational approximation can be computed via gradient descent in polylogarithmic time per iteration.
We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate for stationary kernels than alternative approaches.
arXiv Detail & Related papers (2023-01-30T21:50:08Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is an algorithmic primitive that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
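The paper's solver is extragradient-based; as a familiar reference point for the same entropy-regularized objective, here is the classical Sinkhorn iteration (a different algorithm, shown only for orientation; marginals, cost, and regularization strength are invented).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
mu = np.full(n, 1.0 / n)           # source marginal (assumed uniform)
nu = np.full(n, 1.0 / n)           # target marginal (assumed uniform)
C = rng.random((n, n))             # toy cost matrix
eps = 0.05                         # entropic regularization strength

K = np.exp(-C / eps)               # Gibbs kernel
u = np.ones(n)
v = np.ones(n)
for _ in range(500):               # alternating marginal-matching updates
    v = nu / (K.T @ u)
    u = mu / (K @ v)
P = u[:, None] * K * v[None, :]    # entropy-regularized transport plan
print((P * C).sum())               # approximate optimal transport cost
```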
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
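The sampler being analyzed is Langevin Monte Carlo; a minimal sketch of its unadjusted form for a standard Gaussian target (the target, step size, and iteration count are illustrative choices):

```python
import numpy as np

# Unadjusted Langevin algorithm for pi(x) ~ exp(-U(x)) with U(x) = |x|^2/2,
# i.e. a standard Gaussian target in d dimensions.
rng = np.random.default_rng(0)
d, eta, n_steps = 10, 0.05, 20_000
grad_U = lambda x: x                         # gradient of the potential
x = np.zeros(d)
samples = np.empty((n_steps, d))
for k in range(n_steps):
    x = x - eta * grad_U(x) + np.sqrt(2 * eta) * rng.normal(size=d)
    samples[k] = x
# After burn-in, sample variances approach 1 (the discretization bias is O(eta)).
print(np.cov(samples[5000:].T).diagonal().round(2))
```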
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We analyze the stochastic ExtraGradient (SEG) method with constant step size and present variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
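A minimal sketch of SEG with iterate averaging on a toy bilinear game min_x max_y x^T A y (the game, noise level, and step size are invented; the paper's scheduled restarting is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))        # bilinear game min_x max_y x^T A y
x, y = np.ones(n), np.ones(n)
xbar, ybar = np.zeros(n), np.zeros(n)
lr, sigma = 0.05, 0.1              # constant step size, gradient noise level
noisy = lambda g: g + sigma * rng.normal(size=n)

for k in range(1, 20001):
    # Extrapolation step at the current point.
    xh = x - lr * noisy(A @ y)
    yh = y + lr * noisy(A.T @ x)
    # Update step, with gradients evaluated at the extrapolated point.
    x = x - lr * noisy(A @ yh)
    y = y + lr * noisy(A.T @ xh)
    # Running averages; these converge to the Nash equilibrium at the origin.
    xbar += (x - xbar) / k
    ybar += (y - ybar) / k
print(np.linalg.norm(xbar), np.linalg.norm(ybar))
```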
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
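A minimal sketch of one such method, SGD with norm clipping, on a non-smooth convex toy problem with heavy-tailed noise (the objective, clipping level, and stepsize schedule are invented, not the paper's exact rules):

```python
import numpy as np

# Clipped SGD for f(x) = |x|_1 with heavy-tailed (Student-t, 1.5 dof,
# infinite-variance) noise on the subgradient.
rng = np.random.default_rng(0)
d, lam = 20, 10.0                            # dimension, clipping threshold
x = 10.0 * np.ones(d)
for k in range(1, 10001):
    g = np.sign(x) + rng.standard_t(df=1.5, size=d)   # noisy subgradient
    gnorm = np.linalg.norm(g)
    if gnorm > lam:
        g *= lam / gnorm                     # clip: rescale to norm lam
    x -= g / np.sqrt(k)                      # O(1/sqrt(k)) stepsize
print(np.linalg.norm(x, 1))                  # should end far below the initial 200
```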
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints [10.578409461429626]
In this work, we adapt integral quadratic constraints theory to first-order methods for smooth and strongly-monotone games.
We provide, for the first time, a global convergence rate for the negative momentum method (NM) with complexity $\mathcal{O}(\kappa^{1.5})$, which matches its known lower bound.
We show that it is impossible for an algorithm with one step of memory to achieve acceleration if it only queries the gradient once per batch.
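As a concrete instance of one of the analyzed methods, here is negative momentum on a toy bilinear game, paired with alternating updates as is customary for NM in this setting (the game, step size, and momentum coefficient are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Well-conditioned toy bilinear game min_x max_y x^T A y.
A = np.eye(5) + 0.5 * rng.normal(size=(5, 5))
lr, beta = 0.1, -0.5               # step size, *negative* momentum coefficient
x, y = np.ones(5), np.ones(5)
dx, dy = np.zeros(5), np.zeros(5)
for _ in range(20000):
    dx = -lr * (A @ y) + beta * dx     # x-player: descent with negative momentum
    x = x + dx                         # alternating update: x moves first...
    dy = lr * (A.T @ x) + beta * dy    # ...and the y-player sees the new x
    y = y + dy
print(np.linalg.norm(x), np.linalg.norm(y))   # norms shrink toward the NE at 0
```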
arXiv Detail & Related papers (2020-09-23T20:02:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.