Closed-form discovery of structural errors in models of chaotic systems by integrating Bayesian sparse regression and data assimilation
- URL: http://arxiv.org/abs/2110.00546v1
- Date: Fri, 1 Oct 2021 17:19:28 GMT
- Title: Closed-form discovery of structural errors in models of chaotic systems by integrating Bayesian sparse regression and data assimilation
- Authors: Rambod Mojgani, Ashesh Chattopadhyay, Pedram Hassanzadeh
- Abstract summary: We introduce a framework named MEDIDA: Model Error Discovery with Interpretability and Data Assimilation.
In MEDIDA, first the model error is estimated from differences between the observed states and model-predicted states.
If observations are noisy, a data assimilation technique such as ensemble Kalman filter (EnKF) is first used to provide a noise-free analysis state of the system.
Finally, an equation-discovery technique, such as the relevance vector machine (RVM), a sparsity-promoting Bayesian method, is used to identify an interpretable, parsimonious, closed-form representation of the model error.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Models used for many important engineering and natural systems are imperfect.
The discrepancy between the mathematical representations of a true physical
system and its imperfect model is called the model error. These model errors
can lead to substantial difference between the numerical solutions of the model
and the observations of the system, particularly in those involving nonlinear,
multi-scale phenomena. Thus, there is substantial interest in reducing model
errors, particularly through understanding their physics and sources and
leveraging the rapid growth of observational data. Here we introduce a
framework named MEDIDA: Model Error Discovery with Interpretability and Data
Assimilation. MEDIDA only requires a working numerical solver of the model and
a small number of noise-free or noisy sporadic observations of the system. In
MEDIDA, first the model error is estimated from differences between the
observed states and model-predicted states (the latter are obtained from a
number of one-time-step numerical integrations from the previous observed
states). If observations are noisy, a data assimilation (DA) technique such as
ensemble Kalman filter (EnKF) is first used to provide a noise-free analysis
state of the system, which is then used in estimating the model error. Finally,
an equation-discovery technique, such as the relevance vector machine (RVM), a
sparsity-promoting Bayesian method, is used to identify an interpretable,
parsimonious, closed-form representation of the model error. Using the chaotic
Kuramoto-Sivashinsky (KS) system as the test case, we demonstrate the excellent
performance of MEDIDA in discovering different types of structural/parametric
model errors, representing different types of missing physics, using noise-free
and noisy observations.
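Below is a minimal, hedged sketch of the workflow the abstract describes, not the authors' code. It assumes access to a one-step solver of the imperfect model; the grid, the library of candidate terms, and the helper names (spectral_derivative, build_library, discover_model_error, imperfect_model_step) are illustrative placeholders, and scikit-learn's ARDRegression stands in for the RVM as a sparsity-promoting Bayesian regressor. With noisy observations, the observed state pairs below would be replaced by EnKF analysis states, which this sketch does not implement.

```python
import numpy as np
from sklearn.linear_model import ARDRegression


def spectral_derivative(u, order, L):
    """order-th spatial derivative of a periodic 1D field u on a domain of length L (FFT-based)."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(u)))


def build_library(u, L):
    """Candidate closed-form terms evaluated at an observed (or analysis) state."""
    u_x = spectral_derivative(u, 1, L)
    terms = {
        "u": u,
        "u_x": u_x,
        "u_xx": spectral_derivative(u, 2, L),
        "u_xxx": spectral_derivative(u, 3, L),
        "u_xxxx": spectral_derivative(u, 4, L),
        "u*u_x": u * u_x,
        "u^2": u ** 2,
    }
    return np.column_stack(list(terms.values())), list(terms.keys())


def discover_model_error(obs_pairs, imperfect_model_step, dt, L):
    """obs_pairs: iterable of (u_k, u_{k+1}) observed states; imperfect_model_step: one-step solver."""
    rows, residuals = [], []
    for u_k, u_kp1 in obs_pairs:
        u_pred = imperfect_model_step(u_k, dt)   # one-time-step forecast with the imperfect model
        r = (u_kp1 - u_pred) / dt                # pointwise model-error estimate
        theta, names = build_library(u_k, L)
        rows.append(theta)
        residuals.append(r)
    X, y = np.vstack(rows), np.concatenate(residuals)
    reg = ARDRegression()                        # sparse Bayesian stand-in for the RVM
    reg.fit(X, y)
    return dict(zip(names, reg.coef_))
```

The nonzero coefficients of the fitted regressor indicate the discovered closed-form error terms; swapping in a dedicated RVM implementation would follow the paper more closely.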
Related papers
- Physics-informed neural networks (PINNs) for numerical model error approximation and superresolution [3.4393226199074114]
We propose physics-informed neural networks (PINNs) for simultaneous numerical model error approximation and superresolution.
PINNs effectively predict model errors in both x and y displacement fields with small differences between predictions and ground truth.
Our findings demonstrate that the integration of physics-informed loss functions enables neural networks (NNs) to surpass a purely data-driven approach for approximating model errors.
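As a hedged, generic illustration of the physics-informed loss mentioned above (the symbols below are placeholders, not the paper's notation), the training objective typically adds a physics-residual penalty to the usual data misfit:

```latex
% Generic composite PINN loss; \hat{e}_\theta is the network's prediction,
% R a physics residual operator, and \lambda a weighting factor (all placeholders).
\[
\mathcal{L}(\theta)
= \frac{1}{N}\sum_{i=1}^{N}\bigl\lVert \hat{e}_\theta(x_i) - e_i \bigr\rVert^2
+ \lambda\,\frac{1}{M}\sum_{j=1}^{M}\bigl\lVert R\bigl[\hat{e}_\theta\bigr](x_j) \bigr\rVert^2 .
\]
```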
arXiv Detail & Related papers (2024-11-14T17:03:09Z)
- Embedded Nonlocal Operator Regression (ENOR): Quantifying model error in learning nonlocal operators [8.585650361148558]
We propose a new framework to learn a nonlocal homogenized surrogate model and its structural model error.
This framework provides discrepancy-adaptive uncertainty quantification for homogenized material response predictions in long-term simulations.
arXiv Detail & Related papers (2024-10-27T04:17:27Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
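As a brief illustration of why a sufficient condition is needed here (this is the standard aliasing argument, not the paper's specific result): equally spaced, error-free samples of a homogeneous linear ODE constrain only the one-step transition matrix, so recovering the system matrix amounts to taking a matrix logarithm, which need not be unique.

```latex
% Standard argument; \Delta is the (assumed) sampling interval.
\[
\dot{x}(t) = A\,x(t), \qquad x_k := x(k\Delta)
\;\Longrightarrow\;
x_{k+1} = e^{A\Delta}\,x_k ,
\]
so the observations determine at most $e^{A\Delta}$, and $A = \tfrac{1}{\Delta}\log\bigl(e^{A\Delta}\bigr)$
is identifiable only when this matrix logarithm is unique (eigenvalues of $A$ whose imaginary parts
differ by a multiple of $2\pi/\Delta$ are aliased).
```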
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Discrepancy Modeling Framework: Learning missing physics, modeling systematic residuals, and disambiguating between deterministic and random effects [4.459306403129608]
In modern dynamical systems, discrepancies between model and measurement can lead to poor quantification.
We introduce a discrepancy modeling framework to identify the missing physics and resolve the model-measurement mismatch.
arXiv Detail & Related papers (2022-03-10T05:37:24Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- A Framework for Machine Learning of Model Error in Dynamical Systems [7.384376731453594]
We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from data.
We cast the problem in both continuous- and discrete-time, for problems in which the model error is memoryless and in which it has significant memory.
We find that hybrid methods substantially outperform solely data-driven approaches in terms of data hunger, demands for model complexity, and overall predictive performance.
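A schematic of the two settings mentioned above, in hypothetical notation (f_0 for the known mechanistic part, m_theta for the learned model-error term), not the paper's own symbols:

```latex
% Schematic decomposition only.
\[
\text{memoryless error:}\quad \dot{x}(t) = f_0\bigl(x(t)\bigr) + m_\theta\bigl(x(t)\bigr),
\qquad
\text{error with memory:}\quad \dot{x}(t) = f_0\bigl(x(t)\bigr) + m_\theta\bigl(\{x(s)\}_{s \le t}\bigr).
\]
```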
arXiv Detail & Related papers (2021-07-14T12:47:48Z)
- Training Structured Mechanical Models by Minimizing Discrete Euler-Lagrange Residual [36.52097893036073]
Structured Mechanical Models (SMMs) are a data-efficient black-box parameterization of mechanical systems.
We propose a methodology for fitting SMMs to data by minimizing the discrete Euler-Lagrange residual.
Experiments show that our methodology learns models that are more accurate than those obtained with conventional schemes for fitting SMMs.
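For context, a hedged sketch of a discrete Euler-Lagrange residual in the unforced case (standard discrete mechanics; the paper's exact objective, including any external forces, may differ):

```latex
% D_i denotes the derivative with respect to the i-th argument of the
% parameterized discrete Lagrangian L_d^theta; q_k are observed configurations.
\[
r_k(\theta) = D_2 L_d^{\theta}(q_{k-1}, q_k) + D_1 L_d^{\theta}(q_k, q_{k+1}),
\qquad
\min_{\theta}\; \sum_k \bigl\lVert r_k(\theta) \bigr\rVert^2 .
\]
```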
arXiv Detail & Related papers (2021-05-05T00:44:01Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)