Considering discrepancy when calibrating a mechanistic electrophysiology model
- URL: http://arxiv.org/abs/2001.04230v2
- Date: Thu, 23 Apr 2020 13:50:13 GMT
- Title: Considering discrepancy when calibrating a mechanistic electrophysiology model
- Authors: Chon Lok Lei, Sanmitra Ghosh, Dominic G. Whittaker, Yasser
Aboelkassem, Kylie A. Beattie, Chris D. Cantwell, Tammo Delhaas, Charles
Houston, Gustavo Montes Novaes, Alexander V. Panfilov, Pras Pathmanathan,
Marina Riabiz, Rodrigo Weber dos Santos, John Walmsley, Keith Worden, Gary R.
Mirams and Richard D. Wilkinson
- Abstract summary: Uncertainty quantification (UQ) is a vital step in using mathematical models and simulations to take decisions.
In this piece we draw attention to an important and under-addressed source of uncertainty in our predictions -- that of uncertainty in the model structure or the equations themselves.
- Score: 41.77362715012383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification (UQ) is a vital step in using mathematical models
and simulations to take decisions. The field of cardiac simulation has begun to
explore and adopt UQ methods to characterise uncertainty in model inputs and
how that propagates through to outputs or predictions. In this perspective
piece we draw attention to an important and under-addressed source of
uncertainty in our predictions -- that of uncertainty in the model structure or
the equations themselves. The difference between imperfect models and reality
is termed model discrepancy, and we are often uncertain as to the size and
consequences of this discrepancy. Here we provide two examples of the
consequences of discrepancy when calibrating models at the ion channel and
action potential scales. Furthermore, we attempt to account for this
discrepancy when calibrating and validating an ion channel model using
different methods, based on modelling the discrepancy using Gaussian processes
(GPs) and autoregressive-moving-average (ARMA) models, then highlight the
advantages and shortcomings of each approach. Finally, suggestions and lines of
enquiry for future work are provided.
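The GP-based discrepancy modelling described above can be sketched in a few lines. The sketch below is an illustration, not the paper's actual method: the mono-exponential "current" model, the hidden second exponential standing in for missing physics, and the RBF kernel hyperparameters are all assumed for the example. The idea is the same, though: calibrate an imperfect model, then fit a zero-mean Gaussian process to the residuals so the systematic mismatch is absorbed by the GP rather than biasing the parameters.

```python
import numpy as np

# Toy "ion channel" model: mono-exponential decay i(t) = a * exp(-t / tau).
def model(t, a, tau):
    return a * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
# "Reality" has a slow second component the model lacks: structural discrepancy.
truth = model(t, 1.0, 2.0) + 0.15 * np.exp(-t / 8.0)
y = truth + rng.normal(0.0, 0.01, t.size)  # observation noise

# Least-squares calibration of the imperfect model (grid search for brevity).
best = min(((a, tau) for a in np.linspace(0.5, 1.5, 41)
                     for tau in np.linspace(1.0, 4.0, 61)),
           key=lambda p: np.sum((y - model(t, *p)) ** 2))
resid = y - model(t, *best)

# Model the residual (discrepancy) with a zero-mean GP under an RBF kernel.
def rbf(x1, x2, ell=2.0, sf=0.1):
    return sf**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

K = rbf(t, t) + 0.01**2 * np.eye(t.size)   # kernel plus noise variance
alpha = np.linalg.solve(K, resid)
gp_mean = rbf(t, t) @ alpha                # GP posterior mean of the discrepancy

# The GP should absorb most of the smooth, systematic mismatch.
print(float(np.std(resid)), float(np.std(resid - gp_mean)))
```

An ARMA treatment of the same residuals would replace the GP step with an autoregressive-moving-average model of the time-correlated residual series; the trade-offs between the two are exactly what the paper examines.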
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- Rigorous Assessment of Model Inference Accuracy using Language Cardinality [5.584832154027001]
We develop a systematic approach that minimizes bias and uncertainty in model accuracy assessment by replacing statistical estimation with deterministic accuracy measures.
We experimentally demonstrate the consistency and applicability of our approach by assessing the accuracy of models inferred by state-of-the-art inference tools.
arXiv Detail & Related papers (2022-11-29T21:03:26Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Discrepancy Modeling Framework: Learning missing physics, modeling systematic residuals, and disambiguating between deterministic and random effects [4.459306403129608]
In modern dynamical systems, discrepancies between model and measurement can lead to poor uncertainty quantification.
We introduce a discrepancy modeling framework to identify the missing physics and resolve the model-measurement mismatch.
arXiv Detail & Related papers (2022-03-10T05:37:24Z)
- Uncertainty estimation under model misspecification in neural network regression [3.2622301272834524]
We study the effect of the model choice on uncertainty estimation.
We highlight that under model misspecification, aleatoric uncertainty is not properly captured.
arXiv Detail & Related papers (2021-11-23T10:18:41Z)
- Closed-form discovery of structural errors in models of chaotic systems by integrating Bayesian sparse regression and data assimilation [0.0]
We introduce a framework named MEDIDA: Model Error Discovery with Interpretability and Data Assimilation.
In MEDIDA, first the model error is estimated from differences between the observed states and model-predicted states.
If observations are noisy, a data assimilation technique such as ensemble Kalman filter (EnKF) is first used to provide a noise-free analysis state of the system.
Finally, an equation-discovery technique, such as the relevance vector machine (RVM), a sparsity-promoting Bayesian method, is used to identify an interpretable, parsimonious, closed-form model of the error.
arXiv Detail & Related papers (2021-10-01T17:19:28Z)
- Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.