Symplectic Learning for Hamiltonian Neural Networks
- URL: http://arxiv.org/abs/2106.11753v2
- Date: Mon, 23 Oct 2023 15:01:35 GMT
- Title: Symplectic Learning for Hamiltonian Neural Networks
- Authors: Marco David and Florian Méhats
- Abstract summary: Hamiltonian Neural Networks (HNNs) took a first step towards a unified "gray box" approach.
We exploit the symplectic structure of Hamiltonian systems with a different loss function.
We mathematically guarantee the existence of an exact Hamiltonian function which the HNN can learn.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning methods are widely used in the natural sciences to model and
predict physical systems from observation data. Yet, they are often used as
poorly understood "black boxes," disregarding existing mathematical structure
and invariants of the problem. Recently, the proposal of Hamiltonian Neural
Networks (HNNs) took a first step towards a unified "gray box" approach, using
physical insight to improve performance for Hamiltonian systems. In this paper,
we explore a significantly improved training method for HNNs, exploiting the
symplectic structure of Hamiltonian systems with a different loss function.
This frees the loss from an artificial lower bound. We mathematically guarantee
the existence of an exact Hamiltonian function which the HNN can learn. This
allows us to prove and numerically analyze the errors made by HNNs which, in
turn, renders them fully explainable. Finally, we present a novel post-training
correction to obtain the true Hamiltonian only from discretized observation
data, up to an arbitrary order.
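As a concrete illustration, here is a minimal PyTorch sketch of the training idea (the network size, the explicit symplectic-Euler step, and all names are illustrative assumptions, not the paper's exact construction): the loss compares one integrator step of the learned Hamiltonian against the next observed state, instead of regressing finite-difference derivative targets, so it is not bounded away from zero by the discretization of the data.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Learned scalar Hamiltonian H_theta(q, p)."""
    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

def symplectic_euler_step(model, q, p, dt):
    """One explicit symplectic-Euler step through the learned Hamiltonian.
    (Exact for separable H; for general H the scheme is formally implicit,
    so this explicit variant is itself an approximation.)"""
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    dHdq = torch.autograd.grad(model(q, p).sum(), q, create_graph=True)[0]
    p_next = p - dt * dHdq                                   # kick
    dHdp = torch.autograd.grad(model(q, p_next).sum(), p_next,
                               create_graph=True)[0]
    q_next = q + dt * dHdp                                   # drift
    return q_next, p_next

def one_step_loss(model, q0, p0, q1, p1, dt):
    """Penalize the mismatch between an integrator step and the next
    observed state; a discrete flow can fit the snapshots exactly, so
    the loss has no artificial lower bound."""
    q_pred, p_pred = symplectic_euler_step(model, q0, p0, dt)
    return ((q_pred - q1) ** 2 + (p_pred - p1) ** 2).mean()

# usage sketch, given snapshot pairs (q0, p0) -> (q1, p1) at spacing dt:
# model = HNN(dim=1); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = one_step_loss(model, q0, p0, q1, p1, dt=0.1)
# opt.zero_grad(); loss.backward(); opt.step()
```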
Related papers
- Learning Generalized Hamiltonians using fully Symplectic Mappings [0.32985979395737786]
Hamiltonian systems have the important property of being conservative, that is, energy is conserved throughout the evolution.
In particular Hamiltonian Neural Networks have emerged as a mechanism to incorporate structural inductive bias into the NN model.
We show that symplectic schemes are robust to noise and provide a good approximation of the system Hamiltonian when the state variables are sampled from noisy observations.
arXiv Detail & Related papers (2024-09-17T12:45:49Z)
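For context, here is a generic NumPy illustration of a symplectic scheme (textbook Störmer-Verlet on a pendulum, not the paper's method): the energy error stays bounded over long trajectories instead of drifting, which is the structural property the entry above relies on.

```python
import numpy as np

def verlet(q, p, dt, steps, dVdq):
    """Stormer-Verlet (leapfrog) for a separable H(q, p) = p^2/2 + V(q)."""
    out = np.empty((steps + 1, 2))
    out[0] = q, p
    for i in range(steps):
        p -= 0.5 * dt * dVdq(q)   # half kick
        q += dt * p               # drift (unit mass)
        p -= 0.5 * dt * dVdq(q)   # half kick
        out[i + 1] = q, p
    return out

# pendulum: V(q) = -cos(q), so dV/dq = sin(q)
traj = verlet(q=1.0, p=0.0, dt=0.1, steps=10_000, dVdq=np.sin)
H = 0.5 * traj[:, 1] ** 2 - np.cos(traj[:, 0])  # energy along the trajectory
print(f"max energy error over 10k steps: {np.abs(H - H[0]).max():.2e}")
```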
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics in function space of graph neural networks (GNNs)
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- Separable Hamiltonian Neural Networks [1.8674308456443722]
Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system.
We propose separable HNNs that embed additive separability within HNNs using observational, learning, and inductive biases.
arXiv Detail & Related papers (2023-09-03T03:54:43Z)
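A minimal sketch of the additive-separability bias from the entry above (layer sizes and names are my assumptions): H(q, p) = T(p) + V(q) is enforced architecturally by giving the kinetic and potential terms independent sub-networks, so cross terms between q and p cannot arise by construction.

```python
import torch
import torch.nn as nn

def scalar_mlp(dim, width=64):
    return nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                         nn.Linear(width, 1))

class SeparableHNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.T = scalar_mlp(dim)  # kinetic term, sees p only
        self.V = scalar_mlp(dim)  # potential term, sees q only

    def forward(self, q, p):
        return (self.T(p) + self.V(q)).squeeze(-1)
```

A side benefit of this design is that explicit symplectic integrators (like the Verlet scheme sketched earlier) apply directly, since they assume exactly this separable form.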
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed with a new series of objective functions that generalize to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z)
- Learning Neural Hamiltonian Dynamics: A Methodological Overview [109.40968389896639]
Hamiltonian dynamics endows neural networks with accurate long-term prediction, interpretability, and data-efficient learning.
We systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodologies.
arXiv Detail & Related papers (2022-02-28T22:54:39Z)
- A unified framework for Hamiltonian deep neural networks [3.0934684265555052]
Training deep neural networks (DNNs) can be difficult due to vanishing/exploding gradients during weight optimization.
We propose a class of DNNs stemming from the time discretization of Hamiltonian systems.
The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures.
arXiv Detail & Related papers (2021-04-27T13:20:24Z)
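A hedged sketch in the spirit of the framework above (this is a Verlet-style Hamiltonian block as in earlier work by Haber and Ruthotto, which the framework encompasses; the paper's exact parameterization differs): the hidden state is split into (y, z) and updated using a weight matrix and its transpose, so each block discretizes a Hamiltonian ODE, which mitigates vanishing and exploding gradients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HamiltonianBlock(nn.Module):
    def __init__(self, dim, h=0.1):
        super().__init__()
        self.K = nn.Linear(dim, dim)
        self.h = h  # step size of the time discretization

    def forward(self, y, z):
        # the z-update uses K^T, the adjoint of the y-update, so the pair
        # of steps discretizes a Hamiltonian system, not a generic ODE
        y = y + self.h * torch.tanh(self.K(z))
        z = z - self.h * torch.tanh(
            F.linear(y, self.K.weight.t(), self.K.bias))
        return y, z
```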
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Sparse Symplectically Integrated Neural Networks [15.191984347149667]
We introduce Sparse Symplectically Integrated Neural Networks (SSINNs), a novel model for learning Hamiltonian dynamical systems from data.
We evaluate SSINNs on four classical Hamiltonian dynamical problems.
arXiv Detail & Related papers (2020-06-10T03:33:37Z)
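A hedged sketch of the sparse-Hamiltonian idea above (the candidate basis, the L1 penalty, and the derivative-matching loss are my simplifications; SSINNs actually train through a high-order symplectic integrator): write H as a linear combination of candidate terms and let a sparsity penalty zero out most coefficients, so the recovered Hamiltonian is a short, readable expression.

```python
import torch

def basis(q, p):
    """Candidate terms for a 1-degree-of-freedom system (my choice)."""
    return torch.stack([q, p, q ** 2, p ** 2, q * p, torch.cos(q)], dim=-1)

coeffs = torch.zeros(6, requires_grad=True)  # one weight per candidate term

def sparse_loss(q, p, dqdt, dpdt, lam=1e-3):
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    H = (basis(q, p) @ coeffs).sum()
    dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
    fit = ((dHdp - dqdt) ** 2 + (dHdq + dpdt) ** 2).mean()  # Hamilton's eqs
    return fit + lam * coeffs.abs().sum()                   # sparsity

# opt = torch.optim.Adam([coeffs], lr=1e-2); after training, the surviving
# nonzero coefficients name the terms of the recovered Hamiltonian.
```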
- Hamiltonian neural networks for solving equations of motion [3.1498833540989413]
We present a Hamiltonian neural network that solves the differential equations that govern dynamical systems.
A symplectic Euler integrator requires about two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same numerical error.
arXiv Detail & Related papers (2020-01-29T21:48:35Z)
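A standalone numeric check that contextualizes the comparison above (my example, not the paper's experiment): symplectic Euler is a first-order method, so cutting its error by a factor of 100 costs roughly 100 times more evaluation points, which is the kind of gap the entry describes.

```python
import numpy as np

def symplectic_euler(q, p, dt, steps):
    """H = (q^2 + p^2)/2, i.e. a unit harmonic oscillator."""
    for _ in range(steps):
        p -= dt * q   # kick:  dp/dt = -dH/dq = -q
        q += dt * p   # drift: dq/dt =  dH/dp =  p (uses the updated p)
    return q, p

T = 2 * np.pi  # one full period: the exact solution returns to (1, 0)
for steps in (100, 1_000, 10_000):
    q, p = symplectic_euler(1.0, 0.0, T / steps, steps)
    print(f"{steps:>6} steps -> error {abs(q - 1.0) + abs(p):.3e}")
# the error falls ~10x per 10x steps, i.e. first-order convergence
```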