Integrating Recurrent Neural Networks with Data Assimilation for
Scalable Data-Driven State Estimation
- URL: http://arxiv.org/abs/2109.12269v1
- Date: Sat, 25 Sep 2021 03:56:53 GMT
- Title: Integrating Recurrent Neural Networks with Data Assimilation for
Scalable Data-Driven State Estimation
- Authors: Stephen G. Penny, Timothy A. Smith, Tse-Chun Chen, Jason A. Platt,
Hsin-Yi Lin, Michael Goodliff, Henry D.I. Abarbanel
- Abstract summary: Data assimilation (DA) is integrated with machine learning to perform entirely data-driven online state estimation.
Recurrent neural networks (RNNs) are implemented as surrogate models to replace key components of the DA cycle in numerical weather prediction (NWP).
It is shown how these RNNs can be initialized using DA methods to directly update the hidden/reservoir state with observations of the target system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data assimilation (DA) is integrated with machine learning in order to
perform entirely data-driven online state estimation. To achieve this,
recurrent neural networks (RNNs) are implemented as surrogate models to replace
key components of the DA cycle in numerical weather prediction (NWP), including
the conventional numerical forecast model, the forecast error covariance
matrix, and the tangent linear and adjoint models. It is shown how these RNNs
can be initialized using DA methods to directly update the hidden/reservoir
state with observations of the target system. The results indicate that these
techniques can be applied to estimate the state of a system for the repeated
initialization of short-term forecasts, even in the absence of a traditional
numerical forecast model. Further, it is demonstrated how these integrated
RNN-DA methods can scale to higher dimensions by applying domain localization
and parallelization, providing a path for practical applications in NWP.
Related papers
- Brain-Inspired Spiking Neural Network for Online Unsupervised Time
Series Prediction [13.521272923545409]
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
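CLURSNN's Random Delay Embedding is not reproduced here, but the underlying idea, reconstructing the dynamical system from delayed observations, can be illustrated with a plain Takens delay embedding of the Lorenz-63 x-coordinate plus a nearest-neighbor predictor. All embedding parameters below are illustrative choices.

```python
import numpy as np

def lorenz63_x(n, dt=0.01):
    """x-coordinate of a Lorenz-63 trajectory (forward-Euler integration)."""
    v = np.array([1.0, 1.0, 1.0])
    out = np.empty(n)
    for t in range(n):
        dv = np.array([10.0 * (v[1] - v[0]),
                       v[0] * (28.0 - v[2]) - v[1],
                       v[0] * v[1] - 8.0 / 3.0 * v[2]])
        v = v + dt * dv
        out[t] = v[0]
    return out

x = lorenz63_x(5000)

# Takens time-delay embedding: each row holds d samples spaced tau apart.
d, tau = 5, 10
last = (d - 1) * tau
n_vec = len(x) - last - 1
M = np.stack([x[i:i + last + 1:tau] for i in range(n_vec)])
y = x[last + 1:last + 1 + n_vec]   # the sample that follows each vector

# Nearest-neighbor forecast in the reconstructed state space.
split = 4000
lib, lib_y = M[:split], y[:split]
preds = []
for q in M[split:split + 200]:
    j = np.argmin(np.linalg.norm(lib - q, axis=1))
    preds.append(lib_y[j])
err = np.mean(np.abs(np.array(preds) - y[split:split + 200]))
print(round(float(err), 3))
```

The point of the embedding is that a single observed coordinate, suitably delayed, recovers enough of the hidden state to make prediction possible at all.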
arXiv Detail & Related papers (2023-04-10T16:18:37Z) - Online Evolutionary Neural Architecture Search for Multivariate
Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z) - Statistical process monitoring of artificial neural networks [1.3213490507208525]
In machine learning, the learned relationship between the input and the output must remain valid during the model's deployment.
We propose considering the latent feature representation of the data (called "embedding") generated by the ANN to determine the time when the data stream starts being nonstationary.
arXiv Detail & Related papers (2022-09-15T16:33:36Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range
Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - On the adaptation of recurrent neural networks for system identification [2.5234156040689237]
This paper presents a transfer learning approach which enables fast and efficient adaptation of Recurrent Neural Network (RNN) models of dynamical systems.
The system dynamics are then assumed to change, leading to an unacceptable degradation of the nominal model performance on the perturbed system.
To cope with the mismatch, the model is augmented with an additive correction term trained on fresh data from the new dynamic regime.
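The additive-correction idea can be sketched as follows: a fixed nominal model is augmented with a correction term trained only on residuals from fresh data in the new regime. A random-feature ridge regressor stands in for the paper's RNN correction, and the dynamics are a toy one-dimensional map.

```python
import numpy as np

rng = np.random.default_rng(1)

def nominal(x):
    """Pre-trained nominal model of the original regime (kept fixed)."""
    return 0.9 * x

def perturbed_system(x):
    """The true dynamics after the regime change: an extra nonlinear term."""
    return 0.9 * x + 0.3 * np.sin(x)

# Fresh data collected from the perturbed system.
x = rng.uniform(-3, 3, 500)
y = perturbed_system(x)

# Additive correction trained only on the nominal model's residuals.
W1, b1 = rng.normal(size=50), rng.normal(size=50)

def features(q):
    return np.tanh(np.outer(np.atleast_1d(q), W1) + b1)

F = features(x)
resid = y - nominal(x)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(50), F.T @ resid)

def corrected(q):
    return nominal(q) + features(q) @ w

# The corrected model tracks the new regime far better than the
# unmodified nominal model.
xt = rng.uniform(-3, 3, 200)
e_nom = np.mean((nominal(xt) - perturbed_system(xt)) ** 2)
e_cor = np.mean((corrected(xt) - perturbed_system(xt)) ** 2)
print(round(float(e_nom), 4), round(float(e_cor), 6))
```

Training only the correction is what makes the adaptation fast: the nominal model's parameters are never touched, so the fresh-data fit is a small linear problem here rather than full retraining.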
arXiv Detail & Related papers (2022-01-21T12:04:17Z) - Physics-constrained deep neural network method for estimating parameters
in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z) - Self-Learning for Received Signal Strength Map Reconstruction with
Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model over some ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models with no architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
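The CP-decomposed weights can be sketched directly: for a matrix-shaped input X, each hidden unit's pre-activation is a sum of R bilinear terms a_r^T X b_r, which equals a dense bilinear weight W = sum_r a_r b_r^T while storing far fewer parameters. Dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J_dim, R, H = 8, 6, 3, 4          # input mode sizes, CP rank, hidden units
X = rng.normal(size=(I, J_dim))      # a single matrix-shaped input sample

# CP-factorized weights: per hidden unit, R pairs of mode factors.
A = rng.normal(size=(H, R, I))
B = rng.normal(size=(H, R, J_dim))

# Rank-R pre-activation: sum_r a_hr^T X b_hr for each hidden unit h,
# computed without ever vectorizing X.
pre = np.array([sum(A[h, r] @ X @ B[h, r] for r in range(R))
                for h in range(H)])
hidden = np.tanh(pre)

# Equivalent dense weights W_h = sum_r outer(a_hr, b_hr): same output,
# but H*I*J parameters instead of H*R*(I+J).
W = np.einsum('hri,hrj->hij', A, B)
pre_dense = np.einsum('hij,ij->h', W, X)
print(np.allclose(pre, pre_dense), H * R * (I + J_dim), H * I * J_dim)
```

The parameter saving grows with the mode sizes: the factorized form scales with I + J per rank-one term, the dense form with I * J.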
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Handling Missing Observations with an RNN-based Prediction-Update Cycle [10.478312054103975]
In tasks such as tracking, time-series data inevitably carry missing observations.
This paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation.
arXiv Detail & Related papers (2021-03-22T11:55:10Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
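The GLM predictive can be illustrated on a toy regression network: posterior samples are pushed through the local linearization f(x, theta*) + J (theta - theta*) rather than through the full network. The MAP estimate and Laplace covariance below are random stand-ins, not the paper's GGN-based posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, theta):
    """Toy regression net: one tanh hidden layer with 3 units.
    theta packs [W1 (3), b1 (3), w2 (3), b2 (1)] into a flat vector."""
    W1, b1, w2, b2 = theta[:3], theta[3:6], theta[6:9], theta[9]
    return np.tanh(np.outer(x, W1) + b1) @ w2 + b2

theta_map = rng.normal(size=10)   # stand-in for a trained MAP estimate
Sigma = 0.05 * np.eye(10)         # stand-in for a Laplace covariance

def jacobian(x, theta, eps=1e-6):
    """Finite-difference Jacobian of f with respect to theta."""
    base = f(x, theta)
    J = np.empty((len(x), len(theta)))
    for k in range(len(theta)):
        tp = theta.copy(); tp[k] += eps
        J[:, k] = (f(x, tp) - base) / eps
    return J

x = np.linspace(-2, 2, 5)
J = jacobian(x, theta_map)
samples = rng.multivariate_normal(theta_map, Sigma, size=5000)

# BNN predictive: posterior samples through the full (nonlinear) network.
bnn_pred = np.mean([f(x, s) for s in samples], axis=0)
# GLM predictive: the same samples through the local linearization
# f(x, theta*) + J (theta - theta*). Its mean is simply f(x, theta*),
# so the linearized predictive stays anchored to the MAP fit.
glm_pred = f(x, theta_map) + np.mean((samples - theta_map) @ J.T, axis=0)
print(np.round(glm_pred - f(x, theta_map), 3))
```

This anchoring is the mechanism behind the underfitting fix the abstract mentions: sampling through the full network can wander away from the MAP fit, while the linearized predictive cannot.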
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.