Causal Falsification of Digital Twins
- URL: http://arxiv.org/abs/2301.07210v4
- Date: Thu, 2 Nov 2023 11:18:20 GMT
- Title: Causal Falsification of Digital Twins
- Authors: Rob Cornish, Muhammad Faaiz Taufiq, Arnaud Doucet, Chris Holmes
- Abstract summary: Digital twins are virtual systems designed to predict how a real-world process will evolve in response to interventions.
We consider how to assess the accuracy of a digital twin using real-world data.
- Score: 33.567972948107005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital twins are virtual systems designed to predict how a real-world
process will evolve in response to interventions. This modelling paradigm holds
substantial promise in many applications, but rigorous procedures for assessing
their accuracy are essential for safety-critical settings. We consider how to
assess the accuracy of a digital twin using real-world data. We formulate this
as a causal inference problem, which leads to a precise definition, appropriate
for many applications, of what it means for a twin to be "correct".
Unfortunately, fundamental results from causal inference mean observational
data cannot be used to certify that a twin is correct in this sense unless
potentially tenuous assumptions are made, such as that the data are
unconfounded. To avoid these assumptions, we propose instead to find situations
in which the twin is not correct, and present a general-purpose statistical
procedure for doing so. Our approach yields reliable and actionable information
about the twin under only the assumption of an i.i.d. dataset of observational
trajectories, and remains sound even if the data are confounded. We apply our
methodology to a large-scale, real-world case study involving sepsis modelling
within the Pulse Physiology Engine, which we assess using the MIMIC-III dataset
of ICU patients.
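The core idea of the abstract — that observational data can falsify a twin but never certify it — can be illustrated with a minimal sketch. This is not the authors' actual procedure; the interface (`twin_sim`, `falsify_twin`) and the support-based test are illustrative assumptions. The soundness logic it encodes: if a correct twin claims outcomes under an action lie almost surely in some range, then observed outcomes among units that actually received that action must also lie in that range, even under unobserved confounding, because the observational law reweights the same interventional components; a violation therefore falsifies the twin, while absence of violations proves nothing.

```python
import numpy as np

def falsify_twin(observed, twin_sim, action, n_sim=10_000, tol=1e-6, seed=0):
    """Check whether observed outcomes falsify a twin's support claim.

    Hypothetical interface (not the paper's API):
      observed : outcomes from i.i.d. trajectories whose recorded action
                 sequence matches `action`.
      twin_sim : callable(action, rng) -> one simulated outcome under
                 do(action), drawn from the twin.

    Returns (falsified, violation_rate, (lo, hi)). A True result is sound
    evidence against the twin even if the data are confounded; a False
    result does NOT certify the twin is correct.
    """
    rng = np.random.default_rng(seed)
    # Estimate the twin's claimed outcome support by Monte Carlo simulation.
    sims = np.array([twin_sim(action, rng) for _ in range(n_sim)])
    lo, hi = sims.min() - tol, sims.max() + tol
    # Any observed outcome outside the claimed support falsifies the twin.
    violations = (observed < lo) | (observed > hi)
    return bool(violations.any()), float(violations.mean()), (lo, hi)
```

For example, if a twin simulates outcomes uniformly on [0, 1] for some treatment, an observed outcome of 2.0 among treated patients falsifies it regardless of how treatment was assigned. A real procedure would additionally account for Monte Carlo and sampling error with an explicit statistical test rather than a hard support check.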
Related papers
- Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance-critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks.
arXiv Detail & Related papers (2024-08-26T13:33:14Z)
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- DAGnosis: Localized Identification of Data Inconsistencies using Structures [73.39285449012255]
Identification and appropriate handling of inconsistencies in data at deployment time is crucial to reliably use machine learning models.
We use directed acyclic graphs (DAGs) to encode the training set's feature probability distribution and independencies as a structure.
Our method, called DAGnosis, leverages these structural interactions to bring valuable and insightful data-centric conclusions.
arXiv Detail & Related papers (2024-02-26T11:29:16Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Bayesian Networks for the robust and unbiased prediction of depression and its symptoms utilizing speech and multimodal data [65.28160163774274]
We apply a Bayesian framework to capture the relationships between depression, depression symptoms, and features derived from speech, facial expression and cognitive game data collected at thymia.
arXiv Detail & Related papers (2022-11-09T14:48:13Z)
- Evaluating Causal Inference Methods [0.4588028371034407]
We introduce a deep generative model-based framework, Credence, to validate causal inference methods.
arXiv Detail & Related papers (2022-02-09T00:21:22Z)
- Uncertainty-aware GAN with Adaptive Loss for Robust MRI Image Enhancement [3.222802562733787]
Conditional generative adversarial networks (GANs) have shown improved performance in learning photo-realistic image-to-image mappings.
This paper proposes a GAN-based framework that (i) models an adaptive loss function for robustness to OOD-noisy data and (ii) estimates the per-voxel uncertainty in the predictions.
We demonstrate our method on two key applications in medical imaging: (i) undersampled magnetic resonance imaging (MRI) reconstruction and (ii) MRI modality propagation.
arXiv Detail & Related papers (2021-10-07T11:29:03Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state of the art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent Variable Models [14.173184309520453]
State-of-the-art methods for causal inference do not consider missing values.
Missing data require an adapted unconfoundedness hypothesis.
Latent confounders whose distribution is learned through variational autoencoders adapted to missing values are considered.
arXiv Detail & Related papers (2020-02-25T12:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.