Neural Ordinary Differential Equations for Nonlinear System
Identification
- URL: http://arxiv.org/abs/2203.00120v1
- Date: Mon, 28 Feb 2022 22:25:53 GMT
- Title: Neural Ordinary Differential Equations for Nonlinear System
Identification
- Authors: Aowabin Rahman, Ján Drgoňa, Aaron Tuor, and Jan Strube
- Abstract summary: We present a study comparing NODE's performance against neural state-space models and classical linear system identification methods.
Experiments show that NODEs can consistently improve the prediction accuracy by an order of magnitude compared to benchmark methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural ordinary differential equations (NODE) have been recently proposed as
a promising approach for nonlinear system identification tasks. In this work,
we systematically compare their predictive performance with current
state-of-the-art nonlinear and classical linear methods. In particular, we
present a quantitative study comparing NODE's performance against neural
state-space models and classical linear system identification methods. We
evaluate each method's inference speed and open-loop prediction error across
eight different dynamical systems. The experiments show that NODEs can
consistently improve prediction accuracy by an order of magnitude compared to
benchmark methods. Besides improved accuracy, we also observed that NODEs are
less sensitive to hyperparameters than neural state-space models. On the other
hand, these performance gains come with a slight increase in computation time
at inference.
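As a concrete illustration of the kind of setup being compared, here is a minimal neural-ODE system-identification sketch. The torchdiffeq solver, the two-layer MLP, the trajectory-matching loss, and the Van der Pol toy system are all illustrative assumptions, not the authors' exact configuration.

```python
# A minimal neural-ODE system-identification sketch (illustrative only; not the
# authors' exact configuration).  Requires the torchdiffeq package.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Small MLP parameterizing the unknown vector field dx/dt = f_theta(x)."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim)
        )

    def forward(self, t, x):
        return self.net(x)


def van_der_pol(t, x):
    """Toy ground-truth system used only to make the sketch self-contained."""
    x1, x2 = x[..., 0], x[..., 1]
    return torch.stack([x2, (1.0 - x1 ** 2) * x2 - x1], dim=-1)


t = torch.linspace(0.0, 10.0, 100)
x0 = torch.tensor([2.0, 0.0])
with torch.no_grad():
    x_true = odeint(van_der_pol, x0, t)       # reference trajectory

func = ODEFunc(state_dim=2)
opt = torch.optim.Adam(func.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    x_pred = odeint(func, x0, t)              # open-loop rollout from x0 alone
    loss = ((x_pred - x_true) ** 2).mean()    # trajectory-matching loss
    loss.backward()
    opt.step()
```

Backpropagating through the solver, as above, is the simplest choice; adjoint-based gradients trade memory for extra computation, which is consistent with the inference-time overhead noted in the abstract.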
Related papers
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear differential equations.
The key idea consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations (a simplified kernel-based sketch follows this entry).
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
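The entry leaves the estimator abstract; as a loose, hypothetical illustration of recovering an SDE drift from discrete observations with a kernel (RKHS) regressor, here is a simplified stand-in based on Euler-Maruyama increments rather than the paper's Fokker-Planck fitting. The drift, diffusion, and bandwidth are toy choices.

```python
# Simplified stand-in for RKHS-based SDE identification: estimate the drift by
# kernel ridge regression on Euler-Maruyama increments.  This is NOT the paper's
# Fokker-Planck-based estimator; drift, diffusion, and bandwidth are toy choices.
import numpy as np

rng = np.random.default_rng(0)
dt, n, sigma = 1e-2, 5000, 0.5

def true_drift(x):
    return x - x ** 3        # double-well drift, assumed for illustration

# Simulate one long trajectory with the Euler-Maruyama scheme.
x = np.empty(n)
x[0] = 0.1
for k in range(n - 1):
    x[k + 1] = x[k] + true_drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Finite-difference increments are noisy pointwise estimates of the drift.
X, y = x[:-1], (x[1:] - x[:-1]) / dt

# Gaussian-kernel ridge regression on a subsample (an RKHS estimator).
idx = rng.choice(n - 1, size=300, replace=False)
Z, h = X[idx], 0.2
K = np.exp(-(Z[:, None] - Z[None, :]) ** 2 / (2 * h ** 2))
alpha = np.linalg.solve(K + 1e-1 * np.eye(len(Z)), y[idx])

def drift_hat(q):
    """Estimated drift at query points q."""
    return np.exp(-(q[:, None] - Z[None, :]) ** 2 / (2 * h ** 2)) @ alpha
```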
- Benchmarking sparse system identification with low-dimensional chaos [1.5849413067450229]
We systematically benchmark sparse regression variants by utilizing the dysts standardized database of chaotic systems.
We demonstrate how this open-source tool can be used to quantitatively compare different methods of system identification.
In all cases, we used ensembling to improve the noise robustness of SINDy and provide statistical comparisons.
arXiv Detail & Related papers (2023-02-04T18:49:52Z)
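In the spirit of the ensembling mentioned in the entry above, here is a self-contained toy version of sequentially thresholded least squares (STLSQ) with bootstrap aggregation. The benchmark itself uses the pysindy and dysts packages; this sketch does not reproduce their APIs.

```python
# Toy sequentially thresholded least squares (STLSQ) with bootstrap ensembling,
# in the spirit of ensemble-SINDy.  The benchmark itself uses the pysindy and
# dysts packages; this self-contained sketch does not reproduce their APIs.
import numpy as np


def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Solve theta @ xi = dxdt, repeatedly zeroing coefficients below threshold."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for j in range(dxdt.shape[1]):           # refit each state dimension
            big = ~small[:, j]
            if big.any():
                xi[big, j] = np.linalg.lstsq(theta[:, big], dxdt[:, j], rcond=None)[0]
    return xi


def ensemble_stlsq(theta, dxdt, n_models=20, seed=0, **kw):
    """Bootstrap time samples and aggregate the discovered coefficient matrices."""
    rng = np.random.default_rng(seed)
    n = theta.shape[0]
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)          # resample rows with replacement
        models.append(stlsq(theta[idx], dxdt[idx], **kw))
    return np.mean(models, axis=0), np.std(models, axis=0)  # mean model and spread
```

The coefficient spread across bootstrap models gives the statistical comparison the entry refers to: terms that survive thresholding in most resamples are more robust to noise.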
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
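A toy numerical check of the mechanism behind this identifiability question: equally spaced, error-free samples of a homogeneous linear ODE determine the one-step transition matrix, and the system matrix follows from its logarithm whenever that logarithm is unambiguous. The toy system below is an assumption, not the paper's condition.

```python
# Equally spaced, error-free samples of xdot = A x satisfy x_{k+1} = expm(A*dt) x_k,
# so A is recoverable when the matrix logarithm is unambiguous.  A toy check,
# not the paper's identifiability condition.
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0.0, 1.0], [-2.0, -0.5]])    # assumed toy system
dt = 0.1
Phi = expm(A * dt)                           # one-step transition matrix

# Sample a single trajectory at equal spacing.
x = [np.array([1.0, 0.0])]
for _ in range(50):
    x.append(Phi @ x[-1])
X = np.stack(x)

# Least-squares estimate of the transition matrix, then a matrix logarithm.
Phi_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
A_hat = logm(Phi_hat).real / dt
print(np.allclose(A_hat, A, atol=1e-6))      # True when identifiable
```

Identifiability fails exactly when the logarithm is ambiguous, e.g. when eigenvalues of the transition matrix alias under the chosen sampling interval.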
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
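For the local-versus-global comparison in the entry above, here is a minimal sketch contrasting a local smoother (Savitzky-Golay, fit per window) with a global one (a smoothing spline fit to the whole record). The test signal and noise level are illustrative assumptions.

```python
# Contrast of a local smoother (Savitzky-Golay, windowed) with a global one
# (smoothing spline over the full record) for a priori denoising of state data.
# The signal and noise level are illustrative choices.
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
clean = np.sin(t) * np.exp(-0.1 * t)
noisy = clean + 0.05 * rng.standard_normal(t.size)

local = savgol_filter(noisy, window_length=31, polyorder=3)    # local fit per window
spline = UnivariateSpline(t, noisy, s=noisy.size * 0.05 ** 2)  # global penalized fit
glob = spline(t)

for name, est in [("savgol", local), ("spline", glob)]:
    print(name, np.sqrt(np.mean((est - clean) ** 2)))  # RMSE vs. the clean signal
```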
- Observation Error Covariance Specification in Dynamical Systems for Data Assimilation using Recurrent Neural Networks [0.5330240017302621]
We propose a data-driven approach based on long short-term memory (LSTM) recurrent neural networks (RNNs).
The proposed approach does not require any knowledge or assumption about prior error distribution.
We have compared the novel approach with two state-of-the-art covariance tuning algorithms, namely DI01 and D05.
arXiv Detail & Related papers (2021-11-11T20:23:00Z)
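The entry does not fix an architecture; one plausible, purely hypothetical sketch maps a window of innovation vectors (observation minus forecast) to a diagonal observation-error covariance with an LSTM. All shapes and layer sizes below are assumptions.

```python
# Hypothetical sketch: an LSTM mapping a window of innovation vectors to a
# diagonal observation-error covariance.  Architecture and shapes are
# assumptions; the entry above does not specify them.
import torch
import torch.nn as nn


class CovarianceLSTM(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, obs_dim)

    def forward(self, innovations):
        # innovations: (batch, time, obs_dim)
        out, _ = self.lstm(innovations)
        log_var = self.head(out[:, -1])           # use the last hidden state
        return torch.diag_embed(log_var.exp())    # positive-definite diagonal R


model = CovarianceLSTM(obs_dim=3)
R = model(torch.randn(8, 20, 3))                  # (8, 3, 3) covariance estimates
```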
- Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models [38.17499046781131]
We propose a novel approach towards estimating uncertain neural ODEs, avoiding the numerical integration bottleneck.
Our algorithm - distributional gradient matching (DGM) - jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss.
Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
arXiv Detail & Related papers (2021-06-22T08:40:51Z)
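A much-simplified, deterministic relative of the DGM idea above: fit a smoother to the trajectory, then train a neural vector field to match the smoother's time-derivatives. An MSE surrogate stands in for the Wasserstein loss, and the joint smoother training and uncertainty model are omitted.

```python
# Simplified, deterministic gradient matching: fit a spline smoother to the data,
# then train a neural vector field to match the spline's derivatives.  DGM itself
# trains the smoother jointly and uses a Wasserstein loss; MSE is a stand-in here.
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import CubicSpline

t = np.linspace(0, 10, 100)
x = np.stack([np.sin(t), np.cos(t)], axis=1)             # assumed toy trajectory

spline = CubicSpline(t, x)
states = torch.tensor(spline(t), dtype=torch.float32)
grads = torch.tensor(spline(t, 1), dtype=torch.float32)  # spline time-derivatives

f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = ((f(states) - grads) ** 2).mean()   # match model field to smoother slope
    loss.backward()
    opt.step()
# No ODE solver inside the training loop: this is the numerical-integration
# bottleneck that gradient matching avoids.
```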
- Data Assimilation Networks [1.5545257664210517]
Data assimilation aims at forecasting the state of a dynamical system by combining a mathematical representation of the system with noisy observations.
We propose a fully data driven deep learning architecture generalizing recurrent Elman networks and data assimilation algorithms.
Our architecture achieves performance comparable to the ensemble Kalman filter (EnKF) on both the analysis and the propagation of probability density functions of the system state at a given time, without using any explicit regularization technique (a minimal EnKF analysis step is sketched after this entry).
arXiv Detail & Related papers (2020-10-19T17:35:36Z)
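For reference, here is a minimal stochastic EnKF analysis step, the classical baseline the architecture above is compared against. The shapes and toy setup are assumptions.

```python
# Minimal stochastic EnKF analysis step (the classical baseline the network is
# compared against).  Shapes: ensemble E is (state_dim, n_members).
import numpy as np


def enkf_analysis(E, y, H, R, rng):
    """Update forecast ensemble E with observation y, operator H, obs cov R."""
    n = E.shape[1]
    X = E - E.mean(axis=1, keepdims=True)           # ensemble anomalies
    HX = H @ X
    P_yy = HX @ HX.T / (n - 1) + R                  # innovation covariance
    K = (X @ HX.T / (n - 1)) @ np.linalg.inv(P_yy)  # Kalman gain from the ensemble
    # Perturbed observations keep the analysis spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return E + K @ (Y - H @ E)


rng = np.random.default_rng(2)
E = rng.normal(size=(4, 50))                        # toy 4-state, 50-member ensemble
H = np.eye(2, 4)                                    # observe the first two states
E_a = enkf_analysis(E, y=np.array([0.5, -0.2]), H=H, R=0.1 * np.eye(2), rng=rng)
```

The perturbed-observation form keeps the analysis ensemble spread consistent with the Kalman update; square-root variants achieve the same without the added noise.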
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and we learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting (a generic sketch of the min-max pattern follows this entry).
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
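A generic sketch of the min-max pattern, not the paper's exact operator equation: an estimator network minimizes, and an adversarial test-function network maximizes, the moment objective E[u(x)(y - f(x))] - 0.5 E[u(x)^2]. At the adversary's optimum this reduces to least-squares regression on the toy data below, which makes it easy to sanity-check.

```python
# Generic min-max estimation sketch: estimator f minimizes and test-function
# network u maximizes E[u(x)(y - f(x))] - 0.5 E[u(x)^2].  The exact operator
# equation and losses in the paper differ; this shows the alternating pattern.
import torch
import torch.nn as nn


def mlp(din, dout):
    return nn.Sequential(nn.Linear(din, 64), nn.Tanh(), nn.Linear(64, dout))


f, u = mlp(1, 1), mlp(1, 1)                  # estimator and adversary
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(u.parameters(), lr=1e-3)

x = torch.randn(512, 1)
y = 2.0 * x + 0.1 * torch.randn_like(x)      # assumed toy linear relation

for step in range(2000):
    game = (u(x) * (y - f(x))).mean() - 0.5 * (u(x) ** 2).mean()
    opt_u.zero_grad(); (-game).backward(); opt_u.step()   # adversary ascends
    game = (u(x) * (y - f(x))).mean() - 0.5 * (u(x) ** 2).mean()
    opt_f.zero_grad(); game.backward(); opt_f.step()      # estimator descends
```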
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the difference in performance between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.