Latent Space Data Assimilation by using Deep Learning
- URL: http://arxiv.org/abs/2104.00430v1
- Date: Thu, 1 Apr 2021 12:25:55 GMT
- Title: Latent Space Data Assimilation by using Deep Learning
- Authors: Mathis Peyron, Anthony Fillion, Selime Gürol, Victor Marchais, Serge Gratton, Pierre Boudier and Gael Goret
- Abstract summary: Performing Data Assimilation (DA) at a low cost is of prime concern in Earth system modeling.
We incorporate Deep Learning (DL) methods into a DA framework.
We exploit the latent structure provided by autoencoders (AEs) to design an Ensemble Transform Kalman Filter with model error (ETKF-Q) in the latent space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performing Data Assimilation (DA) at a low cost is of prime concern in Earth
system modeling, particularly in the era of big data, when huge quantities of
observations are available. Capitalizing on the ability of neural network
techniques to approximate the solution of PDEs, we incorporate Deep Learning
(DL) methods into a DA framework. More precisely, we exploit the latent
structure provided by autoencoders (AEs) to design an Ensemble Transform
Kalman Filter with model error (ETKF-Q) in the latent space. Model dynamics are
also propagated within the latent space via a surrogate neural network. This
novel ETKF-Q-Latent algorithm (hereafter referred to as ETKF-Q-L) is tested on
a tailored instructional version of the Lorenz 96 equations, named the
augmented Lorenz 96 system: it possesses a latent structure that accurately
represents the observed dynamics. Numerical experiments on this system show
that the ETKF-Q-L approach both reduces the computational cost and achieves
better accuracy than state-of-the-art algorithms such as the ETKF-Q.
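The abstract describes a three-stage cycle: encode the ensemble into the AE latent space, propagate it there with a surrogate network, and perform the Kalman analysis directly on the latent ensemble. Below is a minimal Python sketch of that cycle, not the authors' implementation: `encode`, `decode`, and `surrogate` are hypothetical pre-trained networks; the analysis step is a standard deterministic ETKF (Hunt et al., 2007) rather than the paper's ETKF-Q, which additionally accounts for model error; and `lorenz96` gives the classical tendencies that the paper's augmented system embeds in a higher-dimensional observed space.

```python
import numpy as np
from scipy.linalg import sqrtm

def lorenz96(x, F=8.0):
    """Classical Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def etkf_analysis_latent(Z, y, obs_op, R_inv):
    """Deterministic ETKF analysis applied to a latent-space ensemble.

    Z      : (k, m) ensemble of m latent states of dimension k
    y      : (p,)   observation vector
    obs_op : maps one latent member to observation space, e.g.
             lambda z: H @ decode(z), with decode a hypothetical AE decoder
    R_inv  : (p, p) inverse observation-error covariance
    """
    k, m = Z.shape
    z_mean = Z.mean(axis=1, keepdims=True)
    A = Z - z_mean                                    # latent anomalies
    Y = np.column_stack([obs_op(Z[:, i]) for i in range(m)])
    y_mean = Y.mean(axis=1, keepdims=True)
    Yp = Y - y_mean                                   # obs-space anomalies
    # Ensemble-space analysis covariance (Hunt et al., 2007 formulation)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ R_inv @ Yp)
    w_mean = Pa @ Yp.T @ R_inv @ (y - y_mean.ravel())
    W = np.real(sqrtm((m - 1) * Pa))                  # symmetric square root
    return z_mean + A @ (w_mean[:, None] + W)

# One assimilation cycle, with hypothetical pre-trained networks:
#   Z = encode(X)                                  # lift ensemble to latent space
#   Z = surrogate(Z)                               # forecast latent dynamics
#   Z = etkf_analysis_latent(Z, y, obs_op, R_inv)  # analysis update
#   X = decode(Z)                                  # back to physical space if needed
```

Since both the surrogate forecast and the analysis act on k-dimensional latent states rather than the full model state, the per-cycle cost is governed by the latent dimension, which is the source of the computational savings the abstract reports.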
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Balanced Neural ODEs: nonlinear model order reduction and Koopman operator approximations [0.0]
Variational Autoencoders (VAEs) are a powerful framework for learning compact latent representations.
Neural ODEs excel at learning transient system dynamics.
This work combines the strengths of both to create fast surrogate models with adjustable complexity.
arXiv Detail & Related papers (2024-10-14T05:45:52Z)
- Parametric Taylor series based latent dynamics identification neural networks [0.3139093405260182]
A new latent dynamics identification method for nonlinear systems, P-TLDINets, is introduced.
It relies on a novel neural network structure based on Taylor series expansion and ResNets.
arXiv Detail & Related papers (2024-10-05T15:10:32Z)
- CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new hybrid modeling approach combining knowledge-based and machine learning components, called the conditional Gaussian neural stochastic differential equation (CGNSDE), is developed.
In contrast to the standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art performance, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Parameterized Consistency Learning-based Deep Polynomial Chaos Neural Network Method for Reliability Analysis in Aerospace Engineering [3.541245871465521]
Polynomial chaos expansion (PCE) is a powerful surrogate-model reliability analysis method in aerospace engineering, but constructing a high-order PCE model requires substantial training data.
To alleviate this problem, this paper proposes a parameterized consistency learning-based deep polynomial chaos neural network (Deep PCNN) method.
The Deep PCNN method can significantly reduce the training data cost in constructing a high-order PCE model.
arXiv Detail & Related papers (2022-03-29T15:15:12Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stable and effective training and provably solves the vanishing/exploding-gradient problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.