H-FEX: A Symbolic Learning Method for Hamiltonian Systems
- URL: http://arxiv.org/abs/2506.20607v1
- Date: Wed, 25 Jun 2025 16:53:01 GMT
- Title: H-FEX: A Symbolic Learning Method for Hamiltonian Systems
- Authors: Jasen Lai, Senwei Liang, Chunmei Wang
- Abstract summary: Hamiltonian systems describe a broad class of dynamical systems governed by Hamiltonian functions. We propose the Finite Expression Method for learning Hamiltonian Systems (H-FEX), a symbolic learning method that introduces novel interaction nodes. Our experiments, including those on highly stiff dynamical systems, demonstrate that H-FEX can recover Hamiltonian functions of complex systems.
- Score: 2.4715271879679395
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Hamiltonian systems describe a broad class of dynamical systems governed by Hamiltonian functions, which encode the total energy and dictate the evolution of the system. Data-driven approaches, such as symbolic regression and neural network-based methods, provide a means to learn the governing equations of dynamical systems directly from observational data of Hamiltonian systems. However, these methods often struggle to accurately capture complex Hamiltonian functions while preserving energy conservation. To overcome this limitation, we propose the Finite Expression Method for learning Hamiltonian Systems (H-FEX), a symbolic learning method that introduces novel interaction nodes designed to capture intricate interaction terms effectively. Our experiments, including those on highly stiff dynamical systems, demonstrate that H-FEX can recover Hamiltonian functions of complex systems that accurately capture system dynamics and preserve energy over long time horizons. These findings highlight the potential of H-FEX as a powerful framework for discovering closed-form expressions of complex dynamical systems.
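For intuition about the kind of objective a symbolic Hamiltonian learner optimizes, the sketch below fits two free coefficients of a hand-chosen candidate expression H(q, p) = a*p^2 + b*(1 - cos q) to synthetic pendulum data through the residual of Hamilton's equations. The candidate form, the synthetic data, and the use of scipy are illustrative assumptions; H-FEX itself searches over expression trees with interaction nodes, which this toy does not reproduce.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for symbolic Hamiltonian learning: assume the expression search has
# proposed H(q, p) = a*p**2 + b*(1 - cos q) and only the coefficients (a, b) remain
# to be fit so that Hamilton's equations match estimated time derivatives of the data.
def residual(theta, q, p, qdot, pdot):
    a, b = theta
    dHdq = b * np.sin(q)      # partial H / partial q for the candidate expression
    dHdp = 2.0 * a * p        # partial H / partial p
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
    return np.concatenate([qdot - dHdp, pdot + dHdq])

# Synthetic pendulum data (true H = p**2/2 + (1 - cos q)); derivatives from the true field.
rng = np.random.default_rng(0)
q = rng.uniform(-1.5, 1.5, 200)
p = rng.uniform(-1.0, 1.0, 200)
qdot, pdot = p, -np.sin(q)

fit = least_squares(residual, x0=[1.0, 1.0], args=(q, p, qdot, pdot))
print(fit.x)  # approximately [0.5, 1.0], recovering the true coefficients
```

In practice the data derivatives would be estimated from trajectories (e.g., by finite differences), and the loss would also penalize drift in the conserved energy over long rollouts.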
Related papers
- Learning Generalized Hamiltonians using fully Symplectic Mappings [0.32985979395737786]
Hamiltonian systems have the important property of being conservative, that is, energy is conserved throughout the evolution. In particular, Hamiltonian Neural Networks have emerged as a mechanism to incorporate structural inductive bias into the NN model. We show that symplectic schemes are robust to noise and provide a good approximation of the system Hamiltonian when the state variables are sampled from a noisy observation.
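As a point of reference for the Hamiltonian Neural Network idea mentioned above, here is a minimal PyTorch-style sketch in which a scalar network H_theta(q, p) defines the dynamics through its symplectic gradient. The paper's actual contribution, training through fully symplectic mappings, is not implemented here; names, sizes, and the placeholder data are assumptions.

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Scalar network H_theta(q, p); dynamics follow from its symplectic gradient."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def vector_field(self, x):
        # x = [q, p]; returns (dq/dt, dp/dt) = (dH/dp, -dH/dq)
        x = x.requires_grad_(True)
        H = self.net(x).sum()
        dHdq, dHdp = torch.autograd.grad(H, x, create_graph=True)[0].unbind(-1)
        return torch.stack([dHdp, -dHdq], dim=-1)

model = HamiltonianNet()
x = torch.randn(128, 2)          # observed states
xdot_obs = torch.randn(128, 2)   # placeholder for estimated time derivatives
loss = ((model.vector_field(x) - xdot_obs) ** 2).mean()
loss.backward()                  # gradients for an optimizer step
```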
arXiv Detail & Related papers (2024-09-17T12:45:49Z)
- Coarse-Graining Hamiltonian Systems Using WSINDy [0.0]
We show that WSINDy can successfully identify a reduced Hamiltonian system in the presence of large intrinsic perturbations.
WSINDy naturally preserves the Hamiltonian structure by restricting to a trial basis of Hamiltonian vector fields.
We also provide a contribution to averaging theory by proving that first-order averaging at the level of vector fields preserves Hamiltonian structure in nearly-periodic Hamiltonian systems.
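To make the idea of restricting the regression to Hamiltonian vector fields concrete, the following sketch performs sequentially thresholded least squares over a small, hand-picked library of candidate Hamiltonian terms, using the strong form with precomputed derivatives. The actual WSINDy method works in the weak form (integrating against test functions), which this simplification omits, and the library terms here are illustrative assumptions.

```python
import numpy as np

# Candidate Hamiltonian terms H_k(q, p) = {p^2/2, q^2/2, q^4/4, 1 - cos q} and their
# symplectic gradients X_k = (dH_k/dp, -dH_k/dq), stacked as one regression problem.
def library(q, p):
    dHdq = np.stack([np.zeros_like(q), q, q**3, np.sin(q)], axis=1)
    dHdp = np.stack([p, np.zeros_like(p), np.zeros_like(p), np.zeros_like(p)], axis=1)
    # rows: first the dq/dt equations, then the dp/dt equations
    return np.concatenate([dHdp, -dHdq], axis=0)

def fit_hamiltonian(q, p, qdot, pdot, n_iter=10, thresh=0.05):
    """Sequentially thresholded least squares over the Hamiltonian library."""
    A = library(q, p)
    b = np.concatenate([qdot, pdot])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(n_iter):
        small = np.abs(c) < thresh
        c[small] = 0.0
        big = ~small
        if big.any():
            c[big], *_ = np.linalg.lstsq(A[:, big], b, rcond=None)
    return c  # coefficients of H(q, p) = sum_k c_k H_k(q, p)
```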
arXiv Detail & Related papers (2023-10-09T17:20:04Z)
- Data-Driven Identification of Quadratic Representations for Nonlinear Hamiltonian Systems using Weakly Symplectic Liftings [8.540823673172403]
This work is based on a lifting hypothesis, which posits that nonlinear Hamiltonian systems can be written as quadratic systems with cubic Hamiltonians.
We propose a methodology to learn quadratic dynamical systems, enforcing the Hamiltonian structure in combination with a weakly-enforced symplectic auto-encoder.
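As a toy illustration of the lifting idea (not the paper's weakly symplectic lifting, which also preserves Hamiltonian structure), adding the auxiliary variable w = q^2 turns the cubic dynamics of the quartic oscillator H = p^2/2 + q^4/4 into a purely quadratic system; the sketch below integrates both forms and checks that they agree.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Original dynamics of H = p**2/2 + q**4/4:  dq/dt = p,  dp/dt = -q**3  (cubic).
def original(t, y):
    q, p = y
    return [p, -q**3]

# Lifted dynamics with w = q**2:  dq/dt = p,  dp/dt = -q*w,  dw/dt = 2*q*p  (all quadratic).
def lifted(t, y):
    q, p, w = y
    return [p, -q * w, 2 * q * p]

y0 = [1.0, 0.0]
t_eval = np.linspace(0.0, 10.0, 200)
sol_a = solve_ivp(original, (0, 10), y0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(lifted, (0, 10), y0 + [y0[0] ** 2], t_eval=t_eval, rtol=1e-9, atol=1e-9)
print(np.max(np.abs(sol_a.y[:2] - sol_b.y[:2])))  # agreement up to integration error
```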
arXiv Detail & Related papers (2023-08-02T11:26:33Z)
- Learning Neural Hamiltonian Dynamics: A Methodological Overview [109.40968389896639]
Hamiltonian dynamics endows neural networks with accurate long-term prediction, interpretability, and data-efficient learning.
We systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodologies.
arXiv Detail & Related papers (2022-02-28T22:54:39Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z)
- Learning Hamiltonians of constrained mechanical systems [0.0]
Hamiltonian systems are an elegant and compact formalism in classical mechanics.
We propose new approaches for the accurate approximation of the Hamiltonian function of constrained mechanical systems.
arXiv Detail & Related papers (2022-01-31T14:03:17Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns these basis functions using a supervised learning approach.
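For context on the Koopman-style lifting behind the basis-function viewpoint, the sketch below fits a linear operator on a fixed, hand-picked dictionary (plain EDMD). The paper instead learns the basis functions with a supervised deep network from image data, so the dictionary and function names here are illustrative assumptions.

```python
import numpy as np

# EDMD-style sketch: lift states with a fixed dictionary and fit a linear operator K
# that advances the lifted state one step.
def lift(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1**2, x1 * x2, x2**2])  # hand-picked observables

def fit_koopman(X, X_next):
    Psi, Psi_next = lift(X), lift(X_next)
    K, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)       # Psi @ K ~= Psi_next
    return K

def predict(K, x0, steps):
    z = lift(x0[None, :])
    traj = [z]
    for _ in range(steps):
        z = z @ K                 # linear evolution in the lifted space
        traj.append(z)
    return np.vstack(traj)[:, :2]  # first two observables are the original states
```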
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- DySMHO: Data-Driven Discovery of Governing Equations for Dynamical Systems via Moving Horizon Optimization [77.34726150561087]
We introduce Discovery of Dynamical Systems via Moving Horizon Optimization (DySMHO), a scalable machine learning framework.
DySMHO sequentially learns the underlying governing equations from a large dictionary of basis functions.
Canonical nonlinear dynamical system examples are used to demonstrate that DySMHO can accurately recover the governing laws.
arXiv Detail & Related papers (2021-07-30T20:35:03Z)
- Nonseparable Symplectic Neural Networks [23.77058934710737]
We propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs).
NSSNNs uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data.
We show the unique computational merits of our approach to yield long-term, accurate, and robust predictions for large-scale Hamiltonian systems.
arXiv Detail & Related papers (2020-10-23T19:50:13Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
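One standard way to obtain such a stability guarantee is to project the output of an unconstrained network so that a Lyapunov function decreases along trajectories. The sketch below fixes V(x) = ||x||^2 for simplicity; the paper learns a richer Lyapunov function jointly with the dynamics, so the construction and names here are illustrative.

```python
import torch
import torch.nn as nn

class StableDynamics(nn.Module):
    """Nominal network f_hat whose output is projected to satisfy dV/dt <= -alpha * V."""
    def __init__(self, dim=2, hidden=64, alpha=0.1):
        super().__init__()
        self.f_hat = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.alpha = alpha

    def forward(self, x):
        f = self.f_hat(x)
        gradV = 2.0 * x                                      # gradient of V(x) = ||x||^2
        violation = torch.relu((gradV * f).sum(-1, keepdim=True)
                               + self.alpha * (x * x).sum(-1, keepdim=True))
        # subtract just enough of gradV to enforce the Lyapunov decrease condition
        return f - gradV * violation / ((gradV * gradV).sum(-1, keepdim=True) + 1e-8)

model = StableDynamics()
x = torch.randn(16, 2)
dxdt = model(x)  # satisfies the decrease condition for the chosen V by construction
```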
arXiv Detail & Related papers (2020-01-17T00:04:45Z)