Deep Learning of Chaotic Systems from Partially-Observed Data
- URL: http://arxiv.org/abs/2205.08384v1
- Date: Thu, 12 May 2022 00:18:06 GMT
- Title: Deep Learning of Chaotic Systems from Partially-Observed Data
- Authors: Victor Churchill, Dongbin Xiu
- Abstract summary: A general data-driven numerical framework has been developed for learning and modeling unknown dynamical systems using fully- or partially-observed data.
In this paper, we apply this framework to chaotic systems, in particular the well-known Lorenz 63 and 96 systems.
We demonstrate that the flow-map-based DNN learning method is capable of accurately modeling chaotic systems, even when only a subset of the state variables is available to the DNNs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, a general data-driven numerical framework has been developed for
learning and modeling unknown dynamical systems using fully- or
partially-observed data. The method utilizes deep neural networks (DNNs) to
construct a model for the flow map of the unknown system. Once an accurate DNN
approximation of the flow map is constructed, it can be recursively executed to
serve as an effective predictive model of the unknown system. In this paper, we
apply this framework to chaotic systems, in particular the well-known Lorenz 63
and 96 systems, and critically examine the predictive performance of the
approach. A distinct feature of chaotic systems is that even the smallest
perturbations will lead to large (albeit bounded) deviations in the solution
trajectories. This makes the long-term predictions of the method, or of any
data-driven method, questionable, as the local model accuracy will eventually
degrade and lead to large pointwise errors. Here we employ several other
qualitative and quantitative measures to determine whether the chaotic dynamics
have been learned. These include phase plots, histograms, autocorrelation,
correlation dimension, approximate entropy, and Lyapunov exponent. Using these
measures, we demonstrate that the flow-map-based DNN learning method is capable
of accurately modeling chaotic systems, even when only a subset of the state
variables is available to the DNNs. For example, for the Lorenz 96 system with
40 state variables, when data for only 3 of the variables are available, the
method is able to learn an effective DNN model for those 3 variables and to
accurately reproduce the chaotic behavior of the system.
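A minimal, illustrative sketch of the flow-map learning idea described above: a residual DNN is trained on consecutive snapshot pairs of a Lorenz 63 trajectory and then executed recursively as a predictive model. The network size, time lag, integrator, and training loop below are assumptions for illustration, not the authors' exact configuration, and the memory-based variant for partially-observed data is omitted.

```python
# Sketch: residual flow-map DNN for Lorenz 63 (illustrative settings only).
import numpy as np
import torch
import torch.nn as nn

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def rk4_step(s, dt):
    # Classical RK4 integrator used only to generate synthetic training data.
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Trajectory snapshots separated by a fixed time lag dt.
dt, n_steps = 0.01, 20000
traj = np.empty((n_steps + 1, 3))
traj[0] = np.array([1.0, 1.0, 1.0])
for n in range(n_steps):
    traj[n + 1] = rk4_step(traj[n], dt)

# Training pairs (x_n, x_{n+1}) for the one-lag flow map.
X = torch.tensor(traj[:-1], dtype=torch.float32)
Y = torch.tensor(traj[1:], dtype=torch.float32)

# Residual form: x_{n+1} ~ x_n + N(x_n), so the DNN models only the increment.
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(X + net(X), Y)
    loss.backward()
    opt.step()

# Recursive prediction: feed the model its own output to march forward in time.
with torch.no_grad():
    x = X[0]
    pred = [x]
    for _ in range(5000):
        x = x + net(x)
        pred.append(x)
pred = torch.stack(pred).numpy()  # surrogate trajectory for the diagnostics below
```

Because pointwise accuracy of such a rollout inevitably degrades on a chaotic attractor, the prediction is judged by the statistical and dynamical measures listed in the abstract rather than by trajectory-wise error.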
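As one example of those diagnostics, the sketch below computes the approximate entropy of a scalar time series (e.g., one observed Lorenz component), so the statistic of the reference data can be compared with that of the DNN rollout. The embedding dimension and tolerance are common defaults chosen here as assumptions, not values taken from the paper.

```python
# Sketch: approximate entropy (ApEn) of a scalar time series.
import numpy as np

def approximate_entropy(u, m=2, r_factor=0.2):
    u = np.asarray(u, dtype=float)
    N = len(u)
    r = r_factor * np.std(u)  # tolerance scaled by the series' standard deviation

    def phi(m):
        # All length-m embedding vectors of the series.
        emb = np.array([u[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-norm) distance between every pair of embedding vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # C_i: fraction of vectors within tolerance r of template i (self-match included).
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Usage (pairwise distances are O(N^2), so use a few thousand points):
# approximate_entropy(traj[:3000, 0]) vs. approximate_entropy(pred[:3000, 0])
```

Similar comparisons can be made with histograms, autocorrelation functions, correlation dimension, and Lyapunov exponents to check that the learned model reproduces the chaotic dynamics.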
Related papers
- Modeling Unknown Stochastic Dynamical System Subject to External Excitation [4.357350642401934]
We present a numerical method for learning an unknown nonautonomous dynamical system.
Our basic assumption is that the governing equations for the system are unavailable.
When a sufficient amount of I/O data is available, our method is capable of learning the unknown dynamics.
arXiv Detail & Related papers (2024-06-22T06:21:44Z) - Modeling Unknown Stochastic Dynamical System via Autoencoder [3.8769921482808116]
We present a numerical method to learn an accurate predictive model for an unknown dynamical system from its trajectory data.
It employs the idea of an autoencoder to identify the unobserved latent random variables.
It is also applicable to systems driven by non-Gaussian noises.
arXiv Detail & Related papers (2023-12-15T18:19:22Z) - Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z) - Modeling unknown dynamical systems with hidden parameters [0.0]
We present a data-driven numerical approach for modeling unknown dynamical systems with missing/hidden parameters.
The method is based on training a deep neural network (DNN) model for the unknown system using its trajectory data.
arXiv Detail & Related papers (2022-02-03T21:34:58Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Learning Dynamics from Noisy Measurements using Deep Learning with a
Runge-Kutta Constraint [9.36739413306697]
We discuss a methodology to learn differential equation(s) using noisy and sparsely sampled measurements.
In our methodology, the main innovation lies in the integration of deep neural networks with a classical numerical integration method (a hedged sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-09-23T15:43:45Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data is noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - Data Assimilation Networks [1.5545257664210517]
Data assimilation aims at forecasting the state of a dynamical system by combining a mathematical representation of the system with noisy observations.
We propose a fully data driven deep learning architecture generalizing recurrent Elman networks and data assimilation algorithms.
Our architecture achieves comparable performance to EnKF on both the analysis and the propagation of probability density functions of the system state at a given time without using any explicit regularization technique.
arXiv Detail & Related papers (2020-10-19T17:35:36Z) - Active Learning for Nonlinear System Identification with Guarantees [102.43355665393067]
We study a class of nonlinear dynamical systems whose state transitions depend linearly on a known feature embedding of state-action pairs.
We propose an active learning approach that achieves this by repeating three steps: trajectory planning, trajectory tracking, and re-estimation of the system from all available data.
We show that our method estimates nonlinear dynamical systems at a parametric rate, similar to the statistical rate of standard linear regression.
arXiv Detail & Related papers (2020-06-18T04:54:11Z)
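For the Runge-Kutta-constrained learning entry above, here is a hedged sketch of the general idea as described in its summary: a DNN approximates the right-hand side of the differential equation, and training enforces that a classical RK4 step of that network maps one noisy measurement to the next. The loss, architecture, and variable names are assumptions for illustration, not that paper's exact formulation.

```python
# Sketch: coupling a DNN vector field with a classical RK4 integration step.
import torch
import torch.nn as nn

# DNN approximation of the unknown right-hand side f(x) ~ dx/dt (3-state example).
f_theta = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))

def rk4(f, x, dt):
    # One classical fourth-order Runge-Kutta step of the learned vector field.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_constraint_loss(x_now, x_next, dt):
    # Penalize mismatch between the RK4 prediction and the next measurement.
    return nn.functional.mse_loss(rk4(f_theta, x_now, dt), x_next)

# Hypothetical usage on batches of consecutive snapshots sampled with interval dt:
# loss = rk4_constraint_loss(x_now, x_next, dt=0.01); loss.backward(); opt.step()
```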
This list is automatically generated from the titles and abstracts of the papers on this site.