Learning Neural Hamiltonian Dynamics: A Methodological Overview
- URL: http://arxiv.org/abs/2203.00128v1
- Date: Mon, 28 Feb 2022 22:54:39 GMT
- Title: Learning Neural Hamiltonian Dynamics: A Methodological Overview
- Authors: Zhijie Chen, Mingquan Feng, Junchi Yan, Hongyuan Zha
- Abstract summary: Hamiltonian dynamics endows neural networks with accurate long-term prediction, interpretability, and data-efficient learning.
We systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodologies.
- Score: 109.40968389896639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The past few years have witnessed an increased interest in learning
Hamiltonian dynamics in deep learning frameworks. As an inductive bias based on
physical laws, Hamiltonian dynamics endow neural networks with accurate
long-term prediction, interpretability, and data-efficient learning. However,
Hamiltonian dynamics also impose energy-conservation or dissipation assumptions
on the input data and introduce additional computational overhead. In this paper, we
systematically survey recently proposed Hamiltonian neural network models, with
a special emphasis on methodologies. We discuss the major
contributions of these models and compare them along four overlapping directions:
1) generalized Hamiltonian systems; 2) symplectic integration; 3) generalized
input forms; and 4) extended problem settings. We also provide an outlook on the
fundamental challenges and emerging opportunities in this area.
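To make the surveyed inductive bias concrete, below is a minimal sketch of a Hamiltonian neural network rolled out with a symplectic (leapfrog) integrator, assuming PyTorch. The class names, layer widths, and step size are illustrative and are not taken from any particular model discussed in the survey.

```python
# Minimal HNN sketch: a network parameterizes H_theta(q, p) and the dynamics
# follow from Hamilton's equations; a leapfrog integrator is used for rollout.
import torch
import torch.nn as nn


class HNN(nn.Module):
    """Parameterizes a scalar Hamiltonian H_theta(q, p)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def hamiltonian(self, q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([q, p], dim=-1)).squeeze(-1)

    def vector_field(self, q: torch.Tensor, p: torch.Tensor):
        """Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq."""
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = self.hamiltonian(q, p).sum()
        dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        return dHdp, -dHdq


def leapfrog_step(model: HNN, q, p, dt: float = 0.01):
    """One leapfrog update. For separable Hamiltonians this scheme is
    symplectic and keeps the learned energy nearly conserved over long
    rollouts; nonseparable systems need the extended integrators the
    survey discusses."""
    _, dp = model.vector_field(q, p)
    p_half = p + 0.5 * dt * dp
    dq, _ = model.vector_field(q, p_half)
    q_next = q + dt * dq
    _, dp = model.vector_field(q_next, p_half)
    p_next = p_half + 0.5 * dt * dp
    return q_next.detach(), p_next.detach()


if __name__ == "__main__":
    dim = 2
    model = HNN(dim)
    q, p = torch.randn(1, dim), torch.randn(1, dim)
    # Training would regress the predicted (dq/dt, dp/dt) against observed
    # time derivatives; at test time the system is rolled out symplectically:
    for _ in range(100):
        q, p = leapfrog_step(model, q, p)
```

The gradient-defined vector field is what distinguishes an HNN from a generic neural ODE: the energy-like structure is built in, which is the source of the long-term stability and data efficiency mentioned in the abstract.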
Related papers
- Injecting Hamiltonian Architectural Bias into Deep Graph Networks for Long-Range Propagation [55.227976642410766]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning.
Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks.
We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors.
arXiv Detail & Related papers (2024-05-27T13:36:50Z) - Separable Hamiltonian Neural Networks [1.8674308456443722]
Hamiltonian neural networks (HNNs) are state-of-the-art models that regress the vector field of a dynamical system.
We propose separable HNNs that embed additive separability within HNNs using observational, learning, and inductive biases (a minimal sketch of this additive split appears after this list).
arXiv Detail & Related papers (2023-09-03T03:54:43Z) - Applications of Machine Learning to Modelling and Analysing Dynamical
Systems [0.0]
We propose an architecture which combines existing Hamiltonian Neural Network structures into Adaptable Symplectic Recurrent Neural Networks.
This architecture is found to significantly outperform previously proposed neural networks when predicting Hamiltonian dynamics.
We show that this method works efficiently for single parameter potentials and provides accurate predictions even over long periods of time.
arXiv Detail & Related papers (2023-07-22T19:04:17Z) - Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially at low sampling rates and with noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Learning Hamiltonians of constrained mechanical systems [0.0]
Hamiltonian systems are an elegant and compact formalism in classical mechanics.
We propose new approaches for the accurate approximation of the Hamiltonian function of constrained mechanical systems.
arXiv Detail & Related papers (2022-01-31T14:03:17Z) - SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred
from Vision [73.26414295633846]
A recently proposed class of models attempts to learn latent dynamics from high-dimensional observations.
Existing methods rely on image reconstruction quality, which does not always reflect the quality of the learnt latent dynamics.
We develop a set of new measures, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured.
arXiv Detail & Related papers (2021-11-10T23:26:58Z) - Constructing Neural Network-Based Models for Simulating Dynamical
Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Nonseparable Symplectic Neural Networks [23.77058934710737]
We propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs).
NSSNNs uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data.
We show the unique computational merits of our approach in yielding long-term, accurate, and robust predictions for large-scale Hamiltonian systems.
arXiv Detail & Related papers (2020-10-23T19:50:13Z) - Sparse Symplectically Integrated Neural Networks [15.191984347149667]
We introduce Sparse Symplectically Integrated Neural Networks (SSINNs).
SSINNs are a novel model for learning Hamiltonian dynamical systems from data.
We evaluate SSINNs on four classical Hamiltonian dynamical problems.
arXiv Detail & Related papers (2020-06-10T03:33:37Z)
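For the separable HNNs mentioned above, one simple way to encode additive separability H(q, p) = T(p) + V(q) is to split the Hamiltonian into two sub-networks, as in the sketch below. The sketch assumes PyTorch; the architectures and training biases used in the cited paper may differ.

```python
# Sketch of an additively separable Hamiltonian: H(q, p) = T_phi(p) + V_psi(q).
import torch
import torch.nn as nn


class SeparableHNN(nn.Module):
    """Hamiltonian modeled as a kinetic term T(p) plus a potential term V(q)."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.T = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.V = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def vector_field(self, q: torch.Tensor, p: torch.Tensor):
        # For separable H, dq/dt = dT/dp and dp/dt = -dV/dq, so each half of
        # the state only needs gradients of its own sub-network.
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        dTdp = torch.autograd.grad(self.T(p).sum(), p, create_graph=True)[0]
        dVdq = torch.autograd.grad(self.V(q).sum(), q, create_graph=True)[0]
        return dTdp, -dVdq
```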