Observation Error Covariance Specification in Dynamical Systems for Data Assimilation using Recurrent Neural Networks
- URL: http://arxiv.org/abs/2111.06447v1
- Date: Thu, 11 Nov 2021 20:23:00 GMT
- Title: Observation Error Covariance Specification in Dynamical Systems for Data Assimilation using Recurrent Neural Networks
- Authors: Sibo Cheng, Mingming Qiu
- Abstract summary: We propose a data-driven approach based on long short-term memory (LSTM) recurrent neural networks (RNNs).
The proposed approach does not require any knowledge or assumption about prior error distribution.
We have compared the novel approach with two state-of-the-art covariance tuning algorithms, namely DI01 and D05.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data assimilation techniques are widely used to predict complex dynamical
systems with uncertainties, based on time-series observation data. The
modelling of error covariance matrices is an important element of data
assimilation algorithms and can considerably impact forecasting accuracy. The
estimation of these covariances, which usually relies on empirical assumptions
and physical constraints, is often imprecise and computationally expensive
especially for systems of large dimension. In this work, we propose a
data-driven approach based on long short-term memory (LSTM) recurrent neural
networks (RNN) to improve both the accuracy and the efficiency of observation
covariance specification in data assimilation for dynamical systems. Learning
the covariance matrix from observed/simulated time-series data, the proposed
approach does not require any knowledge or assumption about prior error
distribution, unlike classical posterior tuning methods. We have compared the
novel approach with two state-of-the-art covariance tuning algorithms, namely
DI01 and D05, first in a Lorenz dynamical system and then in a 2D shallow-water
twin-experiment framework with different covariance parameterizations, using
ensemble assimilation. This novel method shows significant advantages in
observation covariance specification, assimilation accuracy and computational
efficiency.
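To make the role of the observation error covariance concrete, here is a minimal scalar sketch (a generic illustration, not the paper's method or code) of a Kalman-style analysis step, in which the observation error variance R weights the observation against the background:

```python
# Scalar analysis step: blend a background estimate xb (error variance B)
# with an observation y (error variance R) via the Kalman gain.
def analysis(xb, y, B, R):
    K = B / (B + R)          # Kalman gain: relative trust in the observation
    xa = xb + K * (y - xb)   # analysis (updated) state
    return xa, K

# With equal variances the analysis sits halfway between xb and y;
# inflating R (less trust in the observation) pulls it back toward xb.
xa_equal, K_equal = analysis(0.0, 1.0, B=1.0, R=1.0)   # K = 0.5,  xa = 0.5
xa_noisy, K_noisy = analysis(0.0, 1.0, B=1.0, R=3.0)   # K = 0.25, xa = 0.25
```

A misspecified R therefore directly degrades the analysis, which is why learning it from data rather than fixing it by empirical assumption matters.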
Related papers
- Online Variational Sequential Monte Carlo
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Efficient Interpretable Nonlinear Modeling for Multiple Time Series
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear differential equations.
The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- Capturing dynamical correlations using implicit neural representations
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- A Causality-Based Learning Approach for Discovering the Underlying Dynamics of Complex Systems from Partial Observations with Stochastic Parameterization
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z)
- Equivariance Discovery by Learned Parameter-Sharing
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- Generalised Latent Assimilation in Heterogeneous Reduced Spaces with Machine Learning Surrogate Models
We develop a system which combines reduced-order surrogate models with a novel data assimilation technique.
Generalised Latent Assimilation can benefit both the efficiency provided by the reduced-order modelling and the accuracy of data assimilation.
arXiv Detail & Related papers (2022-04-07T15:13:12Z)
- Neural Ordinary Differential Equations for Nonlinear System Identification
We present a study comparing NODE's performance against neural state-space models and classical linear system identification methods.
Experiments show that NODEs can consistently improve the prediction accuracy by an order of magnitude compared to benchmark methods.
arXiv Detail & Related papers (2022-02-28T22:25:53Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Data Assimilation Networks
Data assimilation aims at forecasting the state of a dynamical system by combining a mathematical representation of the system with noisy observations.
We propose a fully data driven deep learning architecture generalizing recurrent Elman networks and data assimilation algorithms.
Our architecture achieves comparable performance to EnKF on both the analysis and the propagation of probability density functions of the system state at a given time without using any explicit regularization technique.
arXiv Detail & Related papers (2020-10-19T17:35:36Z)
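As a rough illustration of the EnKF baseline mentioned in the entry above (a generic scalar ensemble square-root update; the values and function are hypothetical, not taken from any of the listed papers), the ensemble analysis step can be sketched as:

```python
# Deterministic ensemble square-root update for a scalar state:
# shift the ensemble mean toward the observation, shrink the spread.
def ensemble_sqrt_update(ensemble, y, R):
    n = len(ensemble)
    mean = sum(ensemble) / n
    P = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # sample variance
    K = P / (P + R)                                       # Kalman gain
    mean_a = mean + K * (y - mean)                        # analysis mean
    scale = (1.0 - K) ** 0.5                              # anomaly shrinkage
    return [mean_a + scale * (x - mean) for x in ensemble]

prior = [0.0, 1.0, 2.0]                                # mean 1.0, variance 1.0
posterior = ensemble_sqrt_update(prior, y=3.0, R=1.0)  # K = 0.5
```

The square-root form is used here because it is deterministic: the posterior mean lands exactly at the Kalman analysis and the posterior variance equals (1 - K) times the prior variance, with no observation perturbations to average over.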
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.