Learning Hamiltonian Systems with Mono-Implicit Runge-Kutta Methods
- URL: http://arxiv.org/abs/2303.03769v1
- Date: Tue, 7 Mar 2023 10:04:51 GMT
- Title: Learning Hamiltonian Systems with Mono-Implicit Runge-Kutta Methods
- Authors: Håkon Noren
- Abstract summary: We show that using mono-implicit Runge-Kutta methods of high order allows for accurate training of Hamiltonian neural networks on small datasets.
This is demonstrated by numerical experiments where the Hamiltonian of the chaotic double pendulum in addition to the Fermi-Pasta-Ulam-Tsingou system is learned from data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical integrators can be used to form interpolation conditions when
training neural networks to approximate the vector field of an ordinary
differential equation (ODE) from data. When numerical one-step schemes such as
the Runge-Kutta methods are used to approximate the temporal discretization of
an ODE with a known vector field, properties such as symmetry and stability are
well studied. Here, we show that using mono-implicit Runge-Kutta methods of
high order allows for accurate training of Hamiltonian neural networks on small
datasets. This is demonstrated by numerical experiments where the Hamiltonian
of the chaotic double pendulum, in addition to the Fermi-Pasta-Ulam-Tsingou
system, is learned from data.
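The key property of mono-implicit Runge-Kutta (MIRK) methods is that their stages depend only on the two endpoints of a step, so when both endpoints come from data the scheme yields an explicit interpolation condition and no nonlinear solve is needed. The sketch below illustrates this with the simplest MIRK scheme, the implicit midpoint rule, on an assumed linear Hamiltonian system (the harmonic oscillator); it uses least squares in place of the paper's neural-network training, and the names `A_true` and `traj` are illustrative, not from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's code): the implicit midpoint rule,
# the simplest mono-implicit Runge-Kutta method, gives the interpolation
# condition  y_{n+1} - y_n = h * f((y_n + y_{n+1}) / 2),
# which is explicit in f when both y_n and y_{n+1} are observed data.
# Here we recover an assumed linear Hamiltonian vector field y' = A y
# (harmonic oscillator, H = (q^2 + p^2) / 2) by least squares.

A_true = np.array([[0.0, 1.0],
                   [-1.0, 0.0]])  # J @ Hessian(H) for the harmonic oscillator
h, N = 0.1, 50

# Exact trajectory: the flow of A_true is a rotation by angle t
t = h * np.arange(N + 1)
q0, p0 = 1.0, 0.0
traj = np.stack([np.cos(t) * q0 + np.sin(t) * p0,
                 -np.sin(t) * q0 + np.cos(t) * p0], axis=1)  # shape (N+1, 2)

# Midpoint interpolation conditions D_n = A m_n, stacked and solved for A
M = 0.5 * (traj[:-1] + traj[1:])        # step midpoints, shape (N, 2)
D = (traj[1:] - traj[:-1]) / h          # scaled differences, shape (N, 2)
A_est = np.linalg.lstsq(M, D, rcond=None)[0].T  # solve M @ A.T = D

print(np.round(A_est, 3))
```

With exact data, the recovered matrix differs from `A_true` only by the O(h^2) discretization bias of the midpoint rule; the paper's higher-order MIRK schemes reduce this bias further, which is what enables accurate training on small datasets.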
Related papers
- Efficient Hamiltonian, structure and trace distance learning of Gaussian states [2.949446809950691]
We show that it is possible to learn the underlying interaction graph in a similar setting and with similar sample complexity.
Our results put the status of the quantum Hamiltonian learning problem for continuous variable systems in a much more advanced state.
arXiv Detail & Related papers (2024-11-05T15:07:20Z)
- Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators [0.0]
We introduce the mean inverse integrator (MII) to increase the accuracy when training neural networks to approximate vector fields from noisy data.
We show that the class of mono-implicit Runge-Kutta methods (MIRK) has particular advantages when used in connection with MII.
arXiv Detail & Related papers (2023-06-06T09:50:38Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and the properties of statistical estimation, remain obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Multiple shooting with neural differential equations [0.0]
This work experimentally demonstrates that if the data contains oscillations, then standard fitting of a neural differential equation may give a flattened-out trajectory that fails to describe the data.
We then introduce the multiple shooting method and present successful demonstrations of this method for the fitting of a neural differential equation to two datasets.
arXiv Detail & Related papers (2021-09-14T15:56:37Z)
- Symplectic Gaussian Process Regression of Hamiltonian Flow Maps [0.8029049649310213]
We present an approach to construct appropriate and efficient emulators for Hamiltonian flow maps.
Intended future applications are long-term tracing of fast charged particles in accelerators and magnetic plasma confinement.
arXiv Detail & Related papers (2020-09-11T17:56:35Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.