Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent
Dynamical Systems
- URL: http://arxiv.org/abs/2107.08024v1
- Date: Fri, 16 Jul 2021 17:31:54 GMT
- Title: Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent
Dynamical Systems
- Authors: Shaan Desai, Marios Mattheakis, David Sondak, Pavlos Protopapas and
Stephen Roberts
- Abstract summary: Accurately learning the temporal behavior of dynamical systems requires models with well-chosen learning biases.
Recent innovations embed the Hamiltonian and Lagrangian formalisms into neural networks.
We show that the proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately learning the temporal behavior of dynamical systems requires
models with well-chosen learning biases. Recent innovations embed the
Hamiltonian and Lagrangian formalisms into neural networks and demonstrate a
significant improvement over other approaches in predicting trajectories of
physical systems. These methods generally tackle autonomous systems that depend
implicitly on time or systems for which a control signal is known a priori.
Despite this success, many real-world dynamical systems are non-autonomous:
they are driven by time-dependent forces and experience energy dissipation. In this
study, we address the challenge of learning from such non-autonomous systems by
embedding the port-Hamiltonian formalism into neural networks, a versatile
framework that can capture energy dissipation and time-dependent control
forces. We show that the proposed \emph{port-Hamiltonian neural network} can
efficiently learn the dynamics of nonlinear physical systems of practical
interest and accurately recover the underlying stationary Hamiltonian,
time-dependent force, and dissipative coefficient. A promising outcome of our
network is its ability to learn and predict chaotic systems such as the Duffing
equation, for which the trajectories are typically hard to learn.
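The structural prior behind the abstract can be illustrated with a minimal sketch. Assuming the port-Hamiltonian form dx/dt = (J - R) grad H(x) + F(t), and using an analytic Hamiltonian for a damped, sinusoidally driven oscillator as a stand-in for the networks that would, in the paper, learn H, the dissipative coefficient, and the time-dependent force (`port_hamiltonian_field` is a hypothetical name, not from the paper):

```python
import numpy as np

def port_hamiltonian_field(H, x, J, R, F, t, eps=1e-6):
    """Evaluate dx/dt = (J - R) grad H(x) + F(t) for a port-Hamiltonian system.

    H : scalar Hamiltonian (a neural network in the paper; analytic here),
    J : skew-symmetric interconnection matrix,
    R : positive semi-definite dissipation matrix,
    F : time-dependent input force.
    grad H is taken by central finite differences for this sketch.
    """
    grad = np.array([(H(x + eps * e) - H(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])
    return (J - R) @ grad + F(t)

# Stand-ins: damped, driven harmonic oscillator with state x = (q, p).
H = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2)      # stationary Hamiltonian
J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # canonical symplectic form
R = np.diag([0.0, 0.1])                          # dissipation acts on momentum
F = lambda t: np.array([0.0, 0.5 * np.sin(t)])   # time-dependent forcing

xdot = port_hamiltonian_field(H, np.array([1.0, 1.0]), J, R, F, 0.0)
```

At x = (1, 1) and t = 0 the field evaluates to (1.0, -1.1): the symplectic part rotates the state while the dissipation term drains momentum, which is exactly the decomposition the network is asked to recover.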
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - TANGO: Time-Reversal Latent GraphODE for Multi-Agent Dynamical Systems [43.39754726042369]
We propose a simple-yet-effective self-supervised regularization term as a soft constraint that aligns the forward and backward trajectories predicted by a continuous graph neural network-based ordinary differential equation (GraphODE).
It effectively imposes time-reversal symmetry to enable more accurate model predictions across a wider range of dynamical systems under classical mechanics.
Experimental results on a variety of physical systems demonstrate the effectiveness of our proposed method.
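The time-reversal soft constraint described above can be written down in a few lines. This is a hedged illustration, not the paper's implementation: `rollout`, `time_reversal_penalty`, and the explicit-Euler integrator are stand-ins for the GraphODE model and its solver.

```python
import numpy as np

def rollout(f, x0, dt, steps):
    """Explicit-Euler rollout of dx/dt = f(x); returns (steps+1, dim) array."""
    traj = [np.asarray(x0, float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * f(traj[-1]))
    return np.array(traj)

def time_reversal_penalty(f, x0, dt, steps):
    """TANGO-style soft constraint (sketch): mean squared gap between the
    forward trajectory and the backward trajectory re-integrated under the
    negated field from the final state."""
    fwd = rollout(f, x0, dt, steps)
    bwd = rollout(lambda x: -f(x), fwd[-1], dt, steps)
    return np.mean((fwd - bwd[::-1]) ** 2)

# usage: an undamped oscillator is time-reversible, so the penalty is near zero
f = lambda x: np.array([x[1], -x[0]])
penalty = time_reversal_penalty(f, [1.0, 0.0], dt=0.01, steps=50)
```

For a reversible field the penalty vanishes up to integrator error, so minimizing it during training pushes the learned dynamics toward time-reversal symmetry.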
arXiv Detail & Related papers (2023-10-10T08:52:16Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Constants of motion network [0.0]
We present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data.
By exploiting the discovered constants of motion, it can produce better predictions on dynamics.
arXiv Detail & Related papers (2022-08-22T15:07:48Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Gradient-Enhanced Physics-Informed Neural Networks for Power Systems
Operational Support [36.96271320953622]
This paper introduces a machine learning method to approximate the behavior of power systems dynamics in near real time.
The proposed framework is based on gradient-enhanced physics-informed neural networks (gPINNs) and encodes the underlying physical laws governing power systems.
arXiv Detail & Related papers (2022-06-21T17:56:55Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Learning reversible symplectic dynamics [0.0]
We propose a new neural network architecture for learning time-reversible dynamical systems from data.
We focus on an adaptation to symplectic systems, because of their importance in physics-informed learning.
arXiv Detail & Related papers (2022-04-26T14:07:40Z) - Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates, noisy and irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
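The stability guarantee in "Learning Stable Deep Dynamics Models" above can be sketched as a Lyapunov projection. A minimal illustration assuming the fixed Lyapunov function V(x) = ||x||^2 / 2 (the paper instead learns V jointly with the dynamics); `stabilize` and `alpha` are hypothetical names introduced here:

```python
import numpy as np

def stabilize(f, x, alpha=0.5):
    """Project a nominal vector field f so that V(x) = ||x||^2 / 2
    decreases along trajectories (sketch of the projection idea).

    Enforces grad V . v <= -alpha * V(x) by subtracting the violating
    component of f(x) along grad V. Assumes x != 0.
    """
    x = np.asarray(x, float)
    v = np.asarray(f(x), float)
    gradV = x                          # gradient of ||x||^2 / 2
    V = 0.5 * gradV @ gradV
    violation = gradV @ v + alpha * V
    if violation > 0:                  # nominal field is destabilizing here
        v = v - violation * gradV / (gradV @ gradV)
    return v

# usage: the unstable field f(x) = x is redirected to point inward
v_stable = stabilize(lambda x: x, [1.0, 0.0])
```

Because the projection is applied pointwise at every state, the decrease condition holds over the entire state space by construction, which is what distinguishes this approach from methods that only penalize instability on training data.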
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.