Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems
- URL: http://arxiv.org/abs/2005.04986v4
- Date: Sun, 20 Feb 2022 01:20:28 GMT
- Title: Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems
- Authors: Yunjin Tong, Shiying Xiong, Xingzhe He, Guanghan Pan, Bo Zhu
- Abstract summary: We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets).
We conduct continuous, long-term predictions of a complex Hamiltonian dynamical system based on sparse, short-term observations.
We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamical systems, including the pendulum, the Lotka--Volterra, the Kepler, and the Hénon--Heiles systems.
- Score: 15.523425139375226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an effective and lightweight learning algorithm, Symplectic Taylor
Neural Networks (Taylor-nets), to conduct continuous, long-term predictions of
a complex Hamiltonian dynamical system based on sparse, short-term observations.
At the heart of our algorithm is a novel neural network architecture consisting
of two sub-networks. Both are embedded with terms in the form of Taylor series
expansion designed with a symmetric structure. The key mechanism underpinning
our architecture is the strong expressiveness and the special symmetric
property of the Taylor series expansion, which together accommodate the
numerical fitting of the gradients of the Hamiltonian with respect to the
generalized coordinates while preserving its symplectic structure. We further
incorporate a fourth-order symplectic integrator, in conjunction with the
neural-ODE framework, into our Taylor-net architecture to learn the
continuous-time evolution of the target systems while simultaneously
preserving their symplectic structures. We demonstrate the efficacy of our
Taylor-net in
predicting a broad spectrum of Hamiltonian dynamical systems, including the
pendulum, the Lotka--Volterra, the Kepler, and the Hénon--Heiles systems. Our
model exhibits unique computational merits, substantially outperforming
previous methods in prediction accuracy, convergence rate, and robustness,
despite using extremely small training data with a short training period (6000
times shorter than the prediction period), small sample sizes, and no
intermediate data to train the networks.
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as the solution of a first-order or second-order ordinary differential equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute sparse architectures and network parameters directly from the given linear time-invariant (LTI) system.
arXiv Detail & Related papers (2024-03-24T16:16:41Z)
- Applications of Machine Learning to Modelling and Analysing Dynamical Systems [0.0]
We propose an architecture which combines existing Hamiltonian Neural Network structures into Adaptable Symplectic Recurrent Neural Networks.
This architecture is found to significantly outperform previously proposed neural networks when predicting Hamiltonian dynamics.
We show that this method works efficiently for single-parameter potentials and provides accurate predictions even over long periods of time.
arXiv Detail & Related papers (2023-07-22T19:04:17Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z)
- Nonseparable Symplectic Neural Networks [23.77058934710737]
We propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs).
NSSNNs uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data.
We show the unique computational merits of our approach in yielding long-term, accurate, and robust predictions for large-scale Hamiltonian systems.
arXiv Detail & Related papers (2020-10-23T19:50:13Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
- Sparse Symplectically Integrated Neural Networks [15.191984347149667]
We introduce Sparse Symplectically Integrated Neural Networks (SSINNs).
SSINNs are a novel model for learning Hamiltonian dynamical systems from data.
We evaluate SSINNs on four classical Hamiltonian dynamical problems.
arXiv Detail & Related papers (2020-06-10T03:33:37Z)