Stabilized Neural Ordinary Differential Equations for Long-Time
Forecasting of Dynamical Systems
- URL: http://arxiv.org/abs/2203.15706v1
- Date: Tue, 29 Mar 2022 16:10:34 GMT
- Title: Stabilized Neural Ordinary Differential Equations for Long-Time
Forecasting of Dynamical Systems
- Authors: Alec J. Linot, Josh W. Burby, Qi Tang, Prasanna Balaprakash, Michael
D. Graham, Romit Maulik
- Abstract summary: We present a data-driven modeling method that accurately captures shocks and chaotic dynamics.
We learn the right-hand side (RHS) of an ODE by adding together the outputs of two NNs, where one learns a linear term and the other a nonlinear term.
Specifically, we implement this by training a sparse linear convolutional NN to learn the linear term and a dense fully-connected nonlinear NN to learn the nonlinear term.
- Score: 1.001737665513683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In data-driven modeling of spatiotemporal phenomena, careful consideration
often needs to be made in capturing the dynamics of the high wavenumbers. This
problem becomes especially challenging when the system of interest exhibits
shocks or chaotic dynamics. We present a data-driven modeling method that
accurately captures shocks and chaotic dynamics by proposing a novel
architecture, stabilized neural ordinary differential equation (ODE). In our
proposed architecture, we learn the right-hand side (RHS) of an ODE by adding
together the outputs of two NNs, where one learns a linear term and the other a
nonlinear term. Specifically, we implement this by training a sparse linear
convolutional NN to learn the linear term and a dense fully-connected nonlinear
NN to learn the nonlinear term. This is in contrast with the standard neural
ODE which involves training only a single NN for learning the RHS. We apply
this setup to the viscous Burgers equation, which exhibits shocked behavior,
and show better short-time tracking and prediction of the energy spectrum at
high wavenumbers than a standard neural ODE. We also find that the stabilized
neural ODE models are much more robust to noisy initial conditions than the
standard neural ODE approach. We also apply this method to chaotic trajectories
of the Kuramoto-Sivashinsky equation. In this case, stabilized neural ODEs keep
long-time trajectories on the attractor, and are highly robust to noisy initial
conditions, while standard neural ODEs fail at achieving either of these
results. We conclude by demonstrating how stabilized neural ODEs provide a
natural extension for use in reduced-order modeling by projecting the dynamics
onto the eigenvectors of the learned linear term.
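As a rough illustration of the architecture described in the abstract (not the authors' released code), the sketch below builds an ODE right-hand side as the sum of a sparse linear convolutional operator and a dense fully connected nonlinear network. The grid size, kernel width, layer sizes, and the use of PyTorch/torchdiffeq are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class StabilizedRHS(nn.Module):
    """du/dt = L u + N(u): sparse linear convolution plus dense nonlinear network."""
    def __init__(self, n_grid=64, kernel_size=5, hidden=200):
        super().__init__()
        # Linear term: one bias-free convolution with no activation, i.e. a banded
        # (sparse) linear operator acting on a periodic spatial grid.
        self.linear = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2,
                                padding_mode="circular", bias=False)
        # Nonlinear term: dense fully connected network on the full state.
        self.nonlinear = nn.Sequential(
            nn.Linear(n_grid, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, n_grid),
        )

    def forward(self, t, u):
        # u: (batch, n_grid) spatial state at time t
        lin = self.linear(u.unsqueeze(1)).squeeze(1)
        return lin + self.nonlinear(u)

# Example use with an off-the-shelf adaptive solver (torchdiffeq is an assumption,
# not something named in the abstract):
# from torchdiffeq import odeint
# u_traj = odeint(StabilizedRHS(), u0, torch.linspace(0.0, 10.0, 101))
```

Because the linear term is an explicit operator, its eigenvectors are directly available for the projection-based reduced-order modeling mentioned at the end of the abstract.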
Related papers
- Unconditional stability of a recurrent neural circuit implementing divisive normalization [0.0]
We prove the remarkable property of unconditional local stability for an arbitrary-dimensional ORGaNICs circuit.
We show that ORGaNICs can be trained by backpropagation through time without gradient clipping/scaling.
arXiv Detail & Related papers (2024-09-27T17:46:05Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems [2.170477444239546]
We develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS).
Our method builds on previous work modeling the latent state evolution via a differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs).
Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics.
arXiv Detail & Related papers (2024-07-19T15:32:15Z)
- Symmetry-regularized neural ordinary differential equations [0.0]
This paper introduces new conservation relations in Neural ODEs using Lie symmetries in both the hidden state dynamics and the back propagation dynamics.
These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model.
New loss functions are constructed from these conservation relations, demonstrating the applicability of symmetry-regularized Neural ODEs in typical modeling tasks.
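A minimal sketch of how such conservation relations could enter training as extra regularization terms; the conserved quantity used here is a placeholder, not one of the Lie-symmetry-derived relations from the paper.

```python
import torch

def conservation_regularized_loss(pred_traj, target_traj, lam=1e-2):
    """Data misfit plus a penalty on drift of a (placeholder) conserved quantity."""
    data_loss = torch.mean((pred_traj - target_traj) ** 2)
    # Placeholder conserved quantity Q(t): total "mass" of the predicted state.
    Q = pred_traj.sum(dim=-1)      # shape: (batch, time)
    drift = Q - Q[..., :1]         # deviation from its initial value along the trajectory
    return data_loss + lam * torch.mean(drift ** 2)
```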
arXiv Detail & Related papers (2023-11-28T09:27:44Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection and related operations as layers of the neural network.
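A hedged sketch of the projection idea only (not the paper's implementation): wrap a full-order right-hand side f with a fixed orthonormal basis V so that the reduced state z evolves as dz/dt = V^T f(V z). How V is obtained (e.g. from proper orthogonal decomposition of trajectories) is an assumption here.

```python
import torch
import torch.nn as nn

class ProjectedRHS(nn.Module):
    """Reduced dynamics dz/dt = V^T f(t, V z) built from a full-order RHS f."""
    def __init__(self, full_rhs: nn.Module, V: torch.Tensor):
        super().__init__()
        self.full_rhs = full_rhs
        self.register_buffer("V", V)          # (n_full, n_reduced), orthonormal columns

    def forward(self, t, z):
        u = z @ self.V.T                      # lift the reduced state to full space
        return self.full_rhs(t, u) @ self.V   # project the full dynamics back down
```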
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
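A rough sketch of the nested-flow structure only: the weight matrix of the main flow is itself evolved by a matrix ODE whose generator is skew-symmetric, which (up to integration error) keeps it on the orthogonal group O(d). The particular parameterization below is an assumption, not the paper's.

```python
import torch
import torch.nn as nn

class ODEtoODESketch(nn.Module):
    """Joint flow of a state x and a weight matrix W constrained near O(d)."""
    def __init__(self, d=16):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(d, d))   # raw generator parameters

    def forward(self, t, state):
        x, W = state                  # x: (d,), W: (d, d), initialized orthogonal
        S = self.A - self.A.T         # skew-symmetric, so dW/dt = W S preserves O(d)
        dx = torch.tanh(W @ x)        # main flow driven by the evolving weights
        dW = W @ S
        return dx, dW
```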
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
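A hedged sketch of a single liquid-time-constant-style cell, assuming the commonly cited form dx/dt = -x/tau + f(x, I)(A - x) with a learned gating nonlinearity f; the dimensions and the choice of f below are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LTCCellSketch(nn.Module):
    """Continuous-time cell: dx/dt = -x/tau + f(x, I) * (A - x)."""
    def __init__(self, n_inputs=4, n_units=8):
        super().__init__()
        self.tau = nn.Parameter(torch.ones(n_units))   # per-unit time constants
        self.A = nn.Parameter(torch.zeros(n_units))    # per-unit target levels
        self.gate = nn.Sequential(                     # learned gating nonlinearity f
            nn.Linear(n_inputs + n_units, n_units), nn.Sigmoid()
        )

    def forward(self, t, x, inputs):
        f = self.gate(torch.cat([inputs, x], dim=-1))
        return -x / self.tau + f * (self.A - x)
```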
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
- How to train your neural ODE: the world of Jacobian and kinetic regularization [7.83405844354125]
Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values.
We introduce a theoretically-grounded combination of both optimal transport and stability regularizations which encourage neural ODEs to prefer simpler dynamics out of all the dynamics that solve a problem well.
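A sketch of the two regularizers named in that summary: a kinetic-energy penalty on the vector field and a stochastic estimate of the Jacobian Frobenius norm. The weights and the Hutchinson-style estimator are illustrative choices, not lifted from the paper's code.

```python
import torch

def ode_regularizers(f, t, x, lam_kinetic=0.01, lam_jac=0.01):
    """Kinetic-energy penalty ||f||^2 plus an estimate of ||df/dx||_F^2."""
    x = x.requires_grad_(True)
    fx = f(t, x)
    kinetic = (fx ** 2).sum(dim=-1).mean()
    # E_eps[ ||eps^T J||^2 ] = ||J||_F^2 for eps ~ N(0, I), via one vector-Jacobian product.
    eps = torch.randn_like(fx)
    vjp = torch.autograd.grad(fx, x, grad_outputs=eps, create_graph=True)[0]
    jac_frobenius = (vjp ** 2).sum(dim=-1).mean()
    return lam_kinetic * kinetic + lam_jac * jac_frobenius
```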
arXiv Detail & Related papers (2020-02-07T14:15:02Z)