Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model
- URL: http://arxiv.org/abs/2205.13085v2
- Date: Thu, 6 Jul 2023 20:22:20 GMT
- Title: Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model
- Authors: Eric V. Strobl, Thomas A. Lasko
- Abstract summary: We focus on identifying patient-specific root causes of disease, which we equate to the sample-specific predictivity of the error terms in a structural equation model.
A customized algorithm called Generalized Root Causal Inference (GRCI) is used to extract the error terms correctly.
- Score: 10.885111578191564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex diseases are caused by a multitude of factors that may differ between
patients even within the same diagnostic category. A few underlying root causes
may nevertheless initiate the development of disease within each patient. We
therefore focus on identifying patient-specific root causes of disease, which
we equate to the sample-specific predictivity of the exogenous error terms in a
structural equation model. We generalize from the linear setting to the
heteroscedastic noise model where $Y = m(X) + \varepsilon\sigma(X)$ with
non-linear functions $m(X)$ and $\sigma(X)$ representing the conditional mean
and mean absolute deviation, respectively. This model preserves identifiability
but introduces non-trivial challenges that require a customized algorithm
called Generalized Root Causal Inference (GRCI) to extract the error terms
correctly. GRCI recovers patient-specific root causes more accurately than
existing alternatives.
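Under this model, once $m(X)$ and $\sigma(X)$ have been estimated, the error term of each sample can be recovered as $\hat{\varepsilon} = (Y - \hat{m}(X)) / \hat{\sigma}(X)$. The following is a minimal two-stage regression sketch of that idea, assuming scikit-learn; it is only an illustration of error-term extraction under the heteroscedastic noise model, not the authors' GRCI algorithm, and the function name `estimate_error_terms` is hypothetical.

```python
# Minimal sketch: recover exogenous error terms under the heteroscedastic
# noise model Y = m(X) + eps * sigma(X) via two-stage regression.
# Illustration only; this is NOT the GRCI algorithm itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def estimate_error_terms(X, y):
    """Return eps_hat = (y - m_hat(X)) / sigma_hat(X) for each sample."""
    # Stage 1: estimate the conditional mean m(X).
    mean_model = GradientBoostingRegressor().fit(X, y)
    residuals = y - mean_model.predict(X)

    # Stage 2: estimate the conditional mean absolute deviation sigma(X)
    # by regressing |residuals| on X.
    scale_model = GradientBoostingRegressor().fit(X, np.abs(residuals))
    sigma_hat = np.clip(scale_model.predict(X), 1e-6, None)  # guard against division by zero

    return residuals / sigma_hat

# Toy usage on data simulated from the model: m(X) = sin(x1), sigma(X) = 0.5 + x2^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
eps = rng.laplace(size=500)
y = np.sin(X[:, 0]) + eps * (0.5 + X[:, 1] ** 2)
eps_hat = estimate_error_terms(X, y)
```

GRCI handles the additional complications of performing this extraction consistently across the variables of a structural equation model; the sketch above covers only the single-regression step.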
Related papers
- Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification [72.08225446179783]
Inverse reinforcement learning aims to infer an agent's preferences from their behaviour.
To do this, we need a behavioural model of how the policy $\pi$ relates to the reward function $R$.
We analyse how sensitive the IRL problem is to misspecification of the behavioural model.
arXiv Detail & Related papers (2024-03-11T16:09:39Z) - Towards frugal unsupervised detection of subtle abnormalities in medical imaging [0.0]
Anomaly detection in medical imaging is a challenging task in contexts where abnormalities are not annotated.
We investigate mixtures of probability distributions whose versatility has been widely recognized.
This online approach is illustrated on the challenging detection of subtle abnormalities in MR brain scans for the follow-up of newly diagnosed Parkinsonian patients.
arXiv Detail & Related papers (2023-09-04T07:44:54Z) - Diagnosis Uncertain Models For Medical Risk Prediction [80.07192791931533]
We consider a patient risk model which has access to vital signs, lab values, and prior history but does not have access to a patient's diagnosis.
We show that such 'all-cause' risk models generalize well across diagnoses but have a predictable failure mode.
We propose a fix for this problem by explicitly modeling the uncertainty in risk prediction coming from uncertainty in patient diagnoses.
arXiv Detail & Related papers (2023-06-29T23:36:04Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Counterfactual Formulation of Patient-Specific Root Causes of Disease [7.6146285961466]
Root causes of disease intuitively correspond to root vertices that increase the likelihood of a diagnosis.
Prior work defined patient-specific root causes of disease using an interventionalist account that only climbs to the second rung of Pearl's Ladder of Causation.
We propose a counterfactual definition matching clinical intuition based on fixed factual data alone.
arXiv Detail & Related papers (2023-05-27T20:24:27Z) - Sample-Specific Root Causal Inference with Latent Variables [10.885111578191564]
Root causal analysis seeks to identify the set of initial perturbations that induce an unwanted outcome.
We rigorously quantify predictivity using Shapley values (an illustrative sketch follows this list).
We introduce a corresponding procedure called Extract Errors with Latents (EEL) for recovering the error terms up to contamination.
arXiv Detail & Related papers (2022-10-27T11:33:26Z) - On the Identifiability and Estimation of Causal Location-Scale Noise Models [122.65417012597754]
We study the class of location-scale or heteroscedastic noise models (LSNMs).
We show the causal direction is identifiable up to some pathological cases.
We propose two estimators for LSNMs: an estimator based on (non-linear) feature maps, and one based on neural networks.
arXiv Detail & Related papers (2022-10-13T17:18:59Z) - Identifying Patient-Specific Root Causes of Disease [10.885111578191564]
Complex diseases are caused by a multitude of factors that may differ between patients.
A few highly predictive root causes may nevertheless generate disease within each patient.
arXiv Detail & Related papers (2022-05-23T20:54:24Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
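For the Shapley-value idea referenced in the "Sample-Specific Root Causal Inference with Latent Variables" entry above, here is an illustrative exact computation of each error term's contribution to one patient's predicted diagnosis probability. The names `predict_fn`, `eps_sample`, and `baseline` are hypothetical, and this sketch is not the papers' exact procedure; it only shows how Shapley values can quantify sample-specific predictivity over a small set of extracted error terms.

```python
# Illustrative sketch: exact Shapley values of one patient's extracted error
# terms with respect to a diagnostic prediction. Feasible only for small d.
from itertools import combinations
from math import comb
import numpy as np

def shapley_values(predict_fn, eps_sample, baseline):
    """predict_fn : callable mapping an error vector to P(diagnosis).
    eps_sample : (d,) extracted error terms for one patient.
    baseline   : (d,) reference values (e.g. population means of the errors)."""
    d = len(eps_sample)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):  # subset sizes 0 .. d-1
            for S in combinations(others, k):
                weight = 1.0 / (d * comb(d - 1, k))  # |S|!(d-1-|S|)!/d!
                with_i = baseline.copy()
                idx_with = list(S) + [i]
                with_i[idx_with] = eps_sample[idx_with]
                without_i = baseline.copy()
                without_i[list(S)] = eps_sample[list(S)]
                phi[i] += weight * (predict_fn(with_i) - predict_fn(without_i))
    return phi

# Toy usage: a logistic model of diagnosis probability over 3 error terms.
w = np.array([2.0, -1.0, 0.5])
predict_fn = lambda e: 1.0 / (1.0 + np.exp(-(e @ w)))
eps_sample = np.array([1.2, -0.3, 0.8])
baseline = np.zeros(3)
print(shapley_values(predict_fn, eps_sample, baseline))
```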
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.