Causes of Effects: Learning individual responses from population data
- URL: http://arxiv.org/abs/2104.13730v1
- Date: Wed, 28 Apr 2021 12:38:11 GMT
- Title: Causes of Effects: Learning individual responses from population data
- Authors: Scott Mueller, Ang Li, Judea Pearl
- Abstract summary: We study the problem of individualization and its applications in medicine.
For example, the probability of benefiting from a treatment concerns an individual having a favorable outcome if treated and an unfavorable outcome if untreated.
We analyze and expand on existing research by applying bounds to the probability of necessity and sufficiency (PNS) along with graphical criteria and practical applications.
- Score: 23.593582720307207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of individualization is recognized as crucial in almost every
field. Identifying causes of effects in specific events is likewise essential
for accurate decision making. However, such estimates invoke counterfactual
relationships, and are therefore indeterminable from population data. For
example, the probability of benefiting from a treatment concerns an individual
having a favorable outcome if treated and an unfavorable outcome if untreated.
Experiments conditioning on fine-grained features are fundamentally inadequate
because we can't test both possibilities for an individual. Tian and Pearl
provided bounds on this and other probabilities of causation using a
combination of experimental and observational data. Even though those bounds
were proven tight, narrower bounds, sometimes significantly so, can be achieved
when structural information is available in the form of a causal model. This
has the power to solve central problems, such as explainable AI, legal
responsibility, and personalized medicine, all of which demand counterfactual
logic. We analyze and expand on existing research by applying bounds to the
probability of necessity and sufficiency (PNS) along with graphical criteria
and practical applications.
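To make the Tian-Pearl bounds referenced in the abstract concrete, below is a minimal sketch (not taken from the paper) that computes the bounds on PNS for a binary treatment and outcome by combining experimental quantities P(y|do(X=1)), P(y|do(X=0)) with the observational joint distribution P(X, Y). The function name `pns_bounds`, its argument names, and the toy numbers are illustrative assumptions only.

```python
# Hedged sketch of the Tian-Pearl bounds on the probability of necessity
# and sufficiency (PNS) for binary treatment X and binary outcome Y.
# Inputs mix experimental (do) probabilities with observational joints.

def pns_bounds(p_y_do_x1, p_y_do_x0, p_x1_y1, p_x1_y0, p_x0_y1, p_x0_y0):
    """Return (lower, upper) bounds on PNS.

    p_y_do_x1 : P(Y=1 | do(X=1))  -- experimental
    p_y_do_x0 : P(Y=1 | do(X=0))  -- experimental
    p_x?_y?   : observational joint probabilities P(X=?, Y=?)
    """
    p_y = p_x1_y1 + p_x0_y1  # observational P(Y=1)

    lower = max(
        0.0,
        p_y_do_x1 - p_y_do_x0,
        p_y - p_y_do_x0,
        p_y_do_x1 - p_y,
    )
    upper = min(
        p_y_do_x1,
        1.0 - p_y_do_x0,                             # P(Y=0 | do(X=0))
        p_x1_y1 + p_x0_y0,                           # P(x, y) + P(x', y')
        p_y_do_x1 - p_y_do_x0 + p_x1_y0 + p_x0_y1,
    )
    return lower, upper


if __name__ == "__main__":
    # Toy numbers (illustrative, not from the paper): the treatment helps
    # on average, and the observational data narrows the bounds further.
    lo, hi = pns_bounds(
        p_y_do_x1=0.7, p_y_do_x0=0.3,
        p_x1_y1=0.35, p_x1_y0=0.15, p_x0_y1=0.10, p_x0_y0=0.40,
    )
    print(f"PNS is bounded in [{lo:.3f}, {hi:.3f}]")  # [0.400, 0.650]
```

With these toy inputs the purely experimental lower bound (0.4) is unchanged, but the observational terms tighten the upper bound from 0.7 to 0.65, illustrating the paper's point that combining data sources (and, further, structural information) can narrow the bounds.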
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Intervention and Conditioning in Causal Bayesian Networks [23.225006087292765]
We show that by making simple yet often realistic independence assumptions, it is possible to estimate the probability of an interventional formula.
In many cases of interest, when the assumptions are appropriate, these probability estimates can be evaluated using observational data.
arXiv Detail & Related papers (2024-05-23T15:55:38Z)
- Identification of Single-Treatment Effects in Factorial Experiments [0.0]
I show that when multiple interventions are randomized in experiments, the effect any single intervention would have outside the experimental setting is not identified absent heroic assumptions.
Observational studies and factorial experiments provide information about potential-outcome distributions under zero and multiple interventions.
I show that researchers who rely on this type of design must either justify linearity of functional forms or specify, with Directed Acyclic Graphs, how variables are related in the real world.
arXiv Detail & Related papers (2024-05-16T04:01:53Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z)
- Probabilities of Causation with Nonbinary Treatment and Effect [20.750773939911685]
Tian and Pearl derived sharp bounds for the probability of necessity and sufficiency.
We extend theoretical bounds for all types of probabilities of causation to multivalued treatments and effects.
arXiv Detail & Related papers (2022-08-19T23:54:47Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Typing assumptions improve identification in causal discovery [123.06886784834471]
Causal discovery from observational data is a challenging task for which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
arXiv Detail & Related papers (2021-07-22T14:23:08Z)
- An introduction to causal reasoning in health analytics [2.199093822766999]
We highlight some of the drawbacks that may arise in traditional machine learning and statistical approaches to analyzing observational data.
We will demonstrate the applications of causal inference in tackling some common machine learning issues.
arXiv Detail & Related papers (2021-05-10T20:25:56Z)