Controlled oscillation modeling using port-Hamiltonian neural networks
- URL: http://arxiv.org/abs/2602.15704v1
- Date: Tue, 17 Feb 2026 16:38:41 GMT
- Title: Controlled oscillation modeling using port-Hamiltonian neural networks
- Authors: Maximino Linares, Guillaume Doras, Thomas Hélie
- Abstract summary: We propose a second-order discrete gradient method embedded in the learning of dynamical systems with port-Hamiltonian neural networks. We show that this discrete gradient method outperforms a Runge-Kutta method of the same order.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning dynamical systems through purely data-driven methods is challenging, as such methods do not capture the underlying conservation laws that would enable them to generalize correctly. Port-Hamiltonian neural network methods have recently been applied successfully to modeling mechanical systems. However, even though these methods are designed on power-balance principles, they usually do not consider power-preserving discretizations and often rely on Runge-Kutta numerical methods. In this work, we propose to use a second-order discrete gradient method embedded in the learning of dynamical systems with port-Hamiltonian neural networks. Numerical results are provided for three systems deliberately selected to span different ranges of dynamical behavior under control: a baseline harmonic oscillator with quadratic energy storage; a Duffing oscillator, whose non-quadratic Hamiltonian produces amplitude-dependent effects; and a self-sustained oscillator, which can stabilize in a controlled limit cycle through the incorporation of nonlinear dissipation. We show that this discrete gradient method outperforms a Runge-Kutta method of the same order. Experiments are also carried out to compare two theoretically equivalent port-Hamiltonian system formulations and to analyze the impact of regularizing the Jacobian of port-Hamiltonian neural networks during training.
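The paper's neural-network implementation is not reproduced here, but the numerical contrast it studies can be sketched on the baseline harmonic oscillator: for a quadratic Hamiltonian, the (Gonzalez) midpoint discrete gradient yields an implicit, exactly energy-preserving update, while an explicit second-order Runge-Kutta step of the same order slowly drifts in energy. All function names below are illustrative, not from the paper's code.

```python
import numpy as np

# Harmonic oscillator H(q, p) = (q^2 + p^2) / 2, dynamics dx/dt = J grad H(x).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # canonical symplectic matrix

def H(x):
    return 0.5 * float(x @ x)

def step_discrete_gradient(x, h):
    # For a quadratic Hamiltonian, the Gonzalez discrete gradient reduces to
    # grad H at the midpoint, so the implicit update solves the linear system
    # (I - h/2 J) x_next = (I + h/2 J) x, which preserves H exactly here.
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * J, (I + 0.5 * h * J) @ x)

def step_rk2(x, h):
    # Heun's method: explicit, same (second) order, but not energy-preserving.
    f = lambda y: J @ y
    k1 = f(x)
    k2 = f(x + h * k1)
    return x + 0.5 * h * (k1 + k2)

x0 = np.array([1.0, 0.0])          # initial state, energy H = 0.5
x_dg, x_rk = x0.copy(), x0.copy()
h, n_steps = 0.1, 1000
for _ in range(n_steps):
    x_dg = step_discrete_gradient(x_dg, h)
    x_rk = step_rk2(x_rk, h)

drift_dg = abs(H(x_dg) - H(x0))    # machine-precision small
drift_rk = abs(H(x_rk) - H(x0))    # grows with the number of steps
```

Over 1000 steps at h = 0.1, the discrete-gradient trajectory holds the energy at its initial value to rounding error, while the Heun trajectory's energy drifts by a few percent; for the non-quadratic Hamiltonians in the paper the discrete-gradient update becomes a nonlinear implicit equation rather than a linear solve.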
Related papers
- Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high dimensional dynamical systems driven by random matrices. We focus on applications to simple models of learning and generalization in machine learning theory.
arXiv Detail & Related papers (2026-01-03T00:12:32Z) - Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Learning Nonlinear Dynamics in Physical Modelling Synthesis using Neural Ordinary Differential Equations [13.755383470312001]
A modal decomposition leads to a coupled nonlinear system of ordinary differential equations. Recent applied machine learning approaches have been used to model lumped dynamical systems automatically from data. We show that the model can be trained to reproduce the nonlinear dynamics of the system.
arXiv Detail & Related papers (2025-05-15T17:17:21Z) - Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
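As a quick illustration of this summary's claim (not code from the cited paper): a skew-symmetric weight matrix makes the linear flow dx/dt = Wx norm-preserving, since d/dt ||x||² = xᵀ(W + Wᵀ)x = 0, so trajectories neither decay nor blow up. The implicit midpoint integrator used below is my own choice, selected because it conserves the norm discretely as well.

```python
import numpy as np

# Linear RNN flow dx/dt = W x with skew-symmetric W = A - A^T:
# the state norm is an invariant, so the trajectory oscillates
# instead of decaying to zero or diverging.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = A - A.T                      # skew-symmetric by construction

x = rng.standard_normal(4)
norm0 = np.linalg.norm(x)

# Implicit midpoint (Cayley) step; for skew-symmetric W the update
# matrix is orthogonal, so ||x|| is conserved at the discrete level too.
h, I = 0.01, np.eye(4)
for _ in range(5000):
    x = np.linalg.solve(I - 0.5 * h * W, (I + 0.5 * h * W) @ x)

drift = abs(np.linalg.norm(x) - norm0)
```

With a forward Euler step in place of the midpoint step, the same skew-symmetric W would show a slow norm growth, which is the discretization effect the surrounding papers address.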
arXiv Detail & Related papers (2025-04-16T10:39:43Z) - Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS). Our method builds on previous work modeling the latent state evolution via a differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs). Our approach resolves key limitations of the rSLDS, such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
arXiv Detail & Related papers (2024-07-19T15:32:15Z) - Using system-reservoir methods to derive effective field theories for broadband nonlinear quantum optics: a case study on cascaded quadratic nonlinearities [0.0]
Nonlinear interactions among a large number of frequency components induce complex dynamics that may defy analysis.
We introduce a perturbative framework for factoring out reservoir degrees of freedom and establishing a concise effective model.
Our results highlight the utility of system-reservoir methods for deriving accurate, intuitive reduced models.
arXiv Detail & Related papers (2023-11-06T23:00:47Z) - A diagrammatic method to compute the effective Hamiltonian of driven nonlinear oscillators [0.0]
We present a new diagrammatic method for computing the effective Hamiltonian of driven nonlinear oscillators. We show the consistency of our schemes with existing perturbation methods such as the Schrieffer-Wolff method. Our method contributes to the understanding of dynamic control within quantum systems and achieves precision essential for advancing future quantum information processors.
arXiv Detail & Related papers (2023-04-26T16:31:21Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z) - Constrained Block Nonlinear Neural Dynamical Models [1.3163098563588727]
Neural network modules conditioned by known priors can be effectively trained and combined to represent systems with nonlinear dynamics.
The proposed method consists of neural network blocks that represent input, state, and output dynamics with constraints placed on the network weights and system variables.
We evaluate the performance of the proposed architecture and training methods on system identification tasks for three nonlinear systems.
arXiv Detail & Related papers (2021-01-06T04:27:54Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.