Learning Hamiltonian Dynamics with Reproducing Kernel Hilbert Spaces and Random Features
- URL: http://arxiv.org/abs/2404.07703v2
- Date: Thu, 24 Oct 2024 11:01:00 GMT
- Title: Learning Hamiltonian Dynamics with Reproducing Kernel Hilbert Spaces and Random Features
- Authors: Torbjørn Smith, Olav Egeland
- Abstract summary: The performance of the method is validated in simulations for three Hamiltonian systems.
It is demonstrated that the use of an odd symplectic kernel improves prediction accuracy and data efficiency.
- Score: 0.7510165488300369
- License:
- Abstract: A method for learning Hamiltonian dynamics from a limited and noisy dataset is proposed. The method learns a Hamiltonian vector field on a reproducing kernel Hilbert space (RKHS) of inherently Hamiltonian vector fields, and in particular, odd Hamiltonian vector fields. This is done with a symplectic kernel, and it is shown how the kernel can be modified to an odd symplectic kernel to impose the odd symmetry. A random feature approximation is developed for the proposed odd kernel to reduce the problem size. The performance of the method is validated in simulations for three Hamiltonian systems. It is demonstrated that the use of an odd symplectic kernel improves prediction accuracy and data efficiency, and that the learned vector fields are Hamiltonian and exhibit the imposed odd symmetry characteristics.
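As a rough illustration of the ingredients named in the abstract (a vector field that is Hamiltonian by construction, X_H(x) = J∇H(x), with H parameterized through a random feature approximation of a kernel), the sketch below fits a toy oscillator field by ridge regression. The system, the plain Gaussian-kernel random Fourier features, the feature count, and the regularization are all illustrative assumptions, not the paper's actual odd symplectic kernel construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Canonical symplectic matrix for one degree of freedom, x = (q, p).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Toy system: harmonic oscillator, H(q, p) = 0.5 * (q^2 + p^2),
# so grad H(x) = x and X_H(x) = J @ x.
def true_field(X):
    return X @ J.T

# Limited, noisy training data: states and vector-field observations.
X = rng.uniform(-1.0, 1.0, size=(60, 2))
Y = true_field(X) + 0.01 * rng.standard_normal(X.shape)

# Random Fourier features phi(x) = sqrt(2/D) * cos(W x + b) approximating
# a Gaussian kernel; D, the length scale, and the ridge parameter below
# are illustrative choices, not values from the paper.
D = 200
W = rng.standard_normal((D, 2))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def feature_jacobian(x):
    # d phi / d x, shape (D, 2), computed analytically.
    return -np.sqrt(2.0 / D) * np.sin(W @ x + b)[:, None] * W

# Model H(x) = phi(x) @ w, so the predicted field is
# f(x) = J @ feature_jacobian(x).T @ w -- Hamiltonian by construction.
# The map w -> f(x) is linear, so w is a ridge least-squares solution.
A = np.vstack([J @ feature_jacobian(x).T for x in X])   # shape (2N, D)
y = Y.reshape(-1)
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(D), A.T @ y)

# Evaluate on held-out states.
Xt = rng.uniform(-1.0, 1.0, size=(50, 2))
pred = np.vstack([J @ feature_jacobian(x).T @ w for x in Xt])
rel_err = np.linalg.norm(pred - true_field(Xt)) / np.linalg.norm(true_field(Xt))
```

Because the learned field is J applied to an exact gradient of the feature model, it inherits the Hamiltonian structure regardless of the fitted weights; only the accuracy of H depends on the data.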
Related papers
- Learning dissipative Hamiltonian dynamics with reproducing kernel Hilbert spaces and random Fourier features [0.7510165488300369]
This paper presents a new method for learning dissipative Hamiltonian dynamics from a limited and noisy dataset.
The performance of the method is validated in simulations for two dissipative Hamiltonian systems.
arXiv Detail & Related papers (2024-10-24T11:35:39Z) - A Structure-Preserving Kernel Method for Learning Hamiltonian Systems [3.594638299627404]
A structure-preserving kernel ridge regression method is presented that allows the recovery of potentially high-dimensional and nonlinear Hamiltonian functions.
The paper extends kernel regression methods to problems in which loss functions involving linear functions of gradients are required.
A full error analysis is conducted that provides convergence rates using fixed and adaptive regularization parameters.
arXiv Detail & Related papers (2024-03-15T07:20:21Z) - Learning of Hamiltonian Dynamics with Reproducing Kernel Hilbert Spaces [0.7510165488300369]
This paper presents a method for learning Hamiltonian dynamics from a limited set of data points.
The Hamiltonian vector field is found by regularized optimization over a reproducing kernel Hilbert space of vector fields that are inherently Hamiltonian.
The performance of the method is validated in simulations for two Hamiltonian systems.
arXiv Detail & Related papers (2023-12-15T12:19:48Z) - Vectorization of the density matrix and quantum simulation of the von Neumann equation of time-dependent Hamiltonians [65.268245109828]
We develop a general framework to linearize the von Neumann equation, rendering it in a form suitable for quantum simulations.
We show that one of these linearizations of the von-Neumann equation corresponds to the standard case in which the state vector becomes the column stacked elements of the density matrix.
A quantum algorithm to simulate the dynamics of the density matrix is proposed.
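The column-stacking linearization described in this entry can be checked numerically: with the column-major vec convention, vec(-i[H, ρ]) = -i(I⊗H - Hᵀ⊗I) vec(ρ). The Hamiltonian and density matrix below are random illustrative matrices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Random Hermitian Hamiltonian (illustrative).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2

# Random density matrix: positive semidefinite, unit trace.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = B @ B.conj().T
rho /= np.trace(rho)

def vec(M):
    # Column stacking, as in the standard vec(.) convention.
    return M.reshape(-1, order="F")

# von Neumann equation: d(rho)/dt = -i [H, rho].
rhs_matrix = -1j * (H @ rho - rho @ H)

# Linearized superoperator acting on the column-stacked state vector.
I = np.eye(n)
L = -1j * (np.kron(I, H) - np.kron(H.T, I))
rhs_vector = L @ vec(rho)

err = np.linalg.norm(rhs_vector - vec(rhs_matrix))
```

The identity follows from vec(AXB) = (Bᵀ⊗A) vec(X) applied to Hρ and ρH separately.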
arXiv Detail & Related papers (2023-06-14T23:08:51Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Topological spin excitations in non-Hermitian spin chains with a generalized kernel polynomial algorithm [1.054316838380053]
We show a numerical approach to compute spectral functions of a non-Hermitian many-body system.
We show that the local spectral functions reveal topological spin excitations in a non-Hermitian spin model.
Our method offers an efficient way to compute local spectral functions in non-Hermitian quantum many-body models.
arXiv Detail & Related papers (2022-08-12T18:00:07Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalize across supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - Hamiltonian simulation with random inputs [74.82351543483588]
A theory of the average-case performance of Hamiltonian simulation with random initial states is developed.
Numerical evidence suggests that this theory accurately characterizes the average error for concrete models.
arXiv Detail & Related papers (2021-11-08T19:08:42Z) - High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z) - Symplectic Gaussian Process Regression of Hamiltonian Flow Maps [0.8029049649310213]
We present an approach to construct appropriate and efficient emulators for Hamiltonian flow maps.
Intended future applications are long-term tracing of fast charged particles in accelerators and magnetic plasma confinement.
arXiv Detail & Related papers (2020-09-11T17:56:35Z) - Stochastic Hamiltonian Gradient Methods for Smooth Games [51.47367707388402]
We focus on the class of Hamiltonian methods and provide the first convergence guarantees for certain classes of smooth games.
Using tools from the optimization literature, we show that SHGD converges linearly to the neighbourhood of a stationary point.
Our results provide the first global non-asymptotic last-iterate convergence guarantees for the class of general games.
arXiv Detail & Related papers (2020-07-08T15:42:13Z)
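The core idea behind the Hamiltonian methods in this entry can be sketched deterministically: define H = ½‖ξ‖² for the game's vector field ξ and run gradient descent on H. The bilinear game, matrix, step size, and iteration count below are illustrative assumptions, and this is plain Hamiltonian gradient descent rather than the paper's stochastic SHGD.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Well-conditioned matrix for the bilinear game min_x max_y x^T A y
# (constructed with known singular values for a clean illustration).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag([1.0, 1.2, 1.5, 2.0]) @ V.T

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Step size tied to the largest singular value of A.
eta = 0.5 / np.linalg.norm(A, 2) ** 2

for _ in range(300):
    # Game vector field: xi = (grad_x f, -grad_y f) = (A y, -A^T x).
    # Hamiltonian: H = 0.5 * ||xi||^2 = 0.5 * (||A y||^2 + ||A^T x||^2).
    # Descend on H: grad_x H = A A^T x, grad_y H = A^T A y.
    x = x - eta * (A @ A.T @ x)
    y = y - eta * (A.T @ A @ y)

# At the equilibrium (0, 0) the game's vector field vanishes.
residual = np.linalg.norm(A @ y) + np.linalg.norm(A.T @ x)
```

Note that plain simultaneous gradient descent/ascent diverges or cycles on this game, whereas descending on the Hamiltonian contracts both players linearly toward the equilibrium.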
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.