Bayesian Calibration of imperfect computer models using Physics-informed
priors
- URL: http://arxiv.org/abs/2201.06463v1
- Date: Mon, 17 Jan 2022 15:16:26 GMT
- Title: Bayesian Calibration of imperfect computer models using Physics-informed
priors
- Authors: Michail Spitieris and Ingelin Steinsland
- Abstract summary: We introduce a computationally efficient data-driven framework suitable for quantifying the uncertainty in physical parameters of computer models.
We extend this into a fully Bayesian framework which allows quantifying the uncertainty of physical parameters and model predictions.
This work is motivated by the need for interpretable parameters for the hemodynamics of the heart for personal treatment of hypertension.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we introduce a computationally efficient data-driven framework
suitable for quantifying the uncertainty in physical parameters of computer
models, represented by differential equations. We construct physics-informed
priors for differential equations, which are multi-output Gaussian process (GP)
priors that encode the model's structure in the covariance function. We extend
this into a fully Bayesian framework which allows quantifying the uncertainty
of physical parameters and model predictions. Since physical models are usually
imperfect descriptions of the real process, we allow the model to deviate from
the observed data by considering a discrepancy function. For inference
Hamiltonian Monte Carlo (HMC) sampling is used. This work is motivated by the
need for interpretable parameters for the hemodynamics of the heart for
personal treatment of hypertension. The model used is the arterial Windkessel
model, which represents the hemodynamics of the heart through differential
equations with physically interpretable parameters of medical interest. Like most
physical models, the Windkessel model is an imperfect description of the real
process. To demonstrate our approach, we simulate noisy data from a more complex
physical model with known mathematical connections to our modeling choice. We
show that without accounting for discrepancy, the posterior of the physical
parameters deviates from the true values, while accounting for discrepancy
yields reasonable quantification of the uncertainty in the physical parameters
and reduces the uncertainty in subsequent model predictions.
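The calibration setup described in the abstract can be sketched in code. The two-element (WK2) Windkessel form, the half-sine inflow waveform, and all parameter values below are illustrative assumptions rather than the paper's exact configuration; the paper's multi-output GP priors and HMC inference are omitted, and only the forward model plus the Kennedy-O'Hagan-style discrepancy observation model is shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element (WK2) arterial Windkessel model (illustrative sketch):
#   C * dP/dt = Q(t) - P / R
# with physically interpretable parameters R (peripheral resistance,
# mmHg*s/ml) and C (arterial compliance, ml/mmHg).

def inflow(t, period=0.8, systole=0.3, amplitude=400.0):
    """Half-sine cardiac inflow Q(t) in ml/s (hypothetical waveform)."""
    tc = t % period
    return amplitude * np.sin(np.pi * tc / systole) if tc < systole else 0.0

def wk2_pressure(R=1.0, C=1.1, p0=80.0, t_end=4.0, n=400):
    """Simulate arterial pressure P(t) from the WK2 ODE."""
    def rhs(t, p):
        return (inflow(t) - p / R) / C
    t_eval = np.linspace(0.0, t_end, n)
    sol = solve_ivp(rhs, (0.0, t_end), [p0], t_eval=t_eval, max_step=1e-2)
    return sol.t, sol.y[0]

# Observation model with discrepancy, in the Kennedy-O'Hagan spirit:
#   y(t) = P(t; R, C) + delta(t) + eps,   eps ~ N(0, sigma^2).
# Without delta(t), the posterior over (R, C) must absorb structural
# model error, biasing the physical parameters away from their true values.
rng = np.random.default_rng(0)
t, p = wk2_pressure(R=1.0, C=1.1)
delta = 2.0 * np.sin(2 * np.pi * t / 4.0)  # stand-in for structural model error
y = p + delta + rng.normal(0.0, 1.0, size=t.shape)
```

In the paper the discrepancy delta(t) is a GP rather than a fixed function, and the physics-informed prior on P(t) is a multi-output GP whose covariance encodes the differential equation; the sketch above only generates the kind of noisy, misspecified data the calibration is tested on.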
Related papers
- Parametric model reduction of mean-field and stochastic systems via higher-order action matching [1.1509084774278489]
We learn models of population dynamics of physical systems that feature gradient and mean-field effects.
We show that our approach accurately predicts population dynamics over a wide range of parameters and outperforms state-of-the-art diffusion-based and flow-based modeling.
arXiv Detail & Related papers (2024-10-15T19:05:28Z)
- Quantification of total uncertainty in the physics-informed reconstruction of CVSim-6 physiology [1.6874375111244329]
This study investigates the decomposition of total uncertainty in the estimation of states and parameters of a differential system simulated with MC X-TFC.
MC X-TFC is applied to a six-compartment stiff ODE system, the CVSim-6 model, developed in the context of human physiology.
arXiv Detail & Related papers (2024-08-13T21:10:39Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Surrogate Modeling for Physical Systems with Preserved Properties and Adjustable Tradeoffs [0.0]
We present a model-based and a data-driven strategy to generate surrogate models.
The latter generates interpretable surrogate models by fitting artificial relations to a presupposed topological structure.
Our framework is compatible with various spatial discretization schemes for distributed parameter models.
arXiv Detail & Related papers (2022-02-02T17:07:02Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Closed-form discovery of structural errors in models of chaotic systems by integrating Bayesian sparse regression and data assimilation [0.0]
We introduce a framework named MEDIDA: Model Error Discovery with Interpretability and Data Assimilation.
In MEDIDA, first the model error is estimated from differences between the observed states and model-predicted states.
If observations are noisy, a data assimilation technique such as ensemble Kalman filter (EnKF) is first used to provide a noise-free analysis state of the system.
Finally, an equation-discovery technique, such as the relevance vector machine (RVM), a sparsity-promoting Bayesian method, is used to identify an interpretable, parsimonious, closed-form representation of the model error.
arXiv Detail & Related papers (2021-10-01T17:19:28Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting [34.61959169976758]
APHYNITY is a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models.
It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for errors of the physical model.
arXiv Detail & Related papers (2020-10-09T09:31:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.