Axiomatization of Interventional Probability Distributions
- URL: http://arxiv.org/abs/2305.04479v2
- Date: Tue, 14 Nov 2023 00:38:43 GMT
- Title: Axiomatization of Interventional Probability Distributions
- Authors: Kayvan Sadeghi and Terry Soo
- Abstract summary: Causal intervention is axiomatized under the rules of do-calculus.
We show that under our axiomatizations, the intervened distributions are Markovian to the defined intervened causal graphs.
We also show that a large class of natural structural causal models satisfy the theory presented here.
- Score: 4.02487511510606
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Causal intervention is an essential tool in causal inference. It is
axiomatized under the rules of do-calculus in the case of structural causal
models. We provide simple axiomatizations for families of probability
distributions to be different types of interventional distributions. Our
axiomatizations neatly lead to a simple and clear theory of causality that has
several advantages: it does not need to make use of any modeling assumptions
such as those imposed by structural causal models; it only relies on
interventions on single variables; it includes most cases with latent variables
and causal cycles; and more importantly, it does not assume the existence of an
underlying true causal graph as we do not take it as the primitive object--in
fact, a causal graph is derived as a by-product of our theory. We show that,
under our axiomatizations, the intervened distributions are Markovian to the
defined intervened causal graphs, and an observed joint probability
distribution is Markovian to the obtained causal graph; these results are
consistent with the case of structural causal models, and as a result, the
existing theory of causal inference applies. We also show that a large class of
natural structural causal models satisfy the theory presented here. We note
that the aim of this paper is axiomatization of interventional families, which
is subtly different from "causal modeling."
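As background for the Markov-property claims above, the sketch below writes out a single-variable intervention in the familiar SCM/do-calculus setting. This is textbook notation (Pearl's truncated factorization), not the axiomatization introduced in the paper, where the causal graph is derived rather than assumed.

```latex
% Illustrative sketch in the standard SCM/do-calculus setting (assumed background,
% not the paper's axioms): for a DAG G over V = {V_1, ..., V_n} with joint P that
% is Markovian to G, intervening on a single variable X in V gives the truncated
% factorization
\[
  P\bigl(v \mid \operatorname{do}(X = x)\bigr)
  \;=\; \prod_{j \,:\, V_j \neq X} P\bigl(v_j \mid \mathrm{pa}_G(v_j)\bigr)\Big|_{X = x},
\]
% and the resulting intervened distribution is Markovian to the intervened graph
% G_{\overline{X}}, obtained from G by deleting all edges pointing into X.
```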
Related papers
- Causal modelling without introducing counterfactuals or abstract distributions [7.09435109588801]
In this paper, we construe causal inference as treatment-wise predictions for finite populations where all assumptions are testable.
The new framework highlights the model-dependence of causal claims as well as the difference between statistical and scientific inference.
arXiv Detail & Related papers (2024-07-24T16:07:57Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal models in string diagrams [0.0]
The framework of causal models provides a principled approach to causal reasoning, applied today across many scientific domains.
We present this framework in the language of string diagrams, interpreted formally using category theory.
We argue and demonstrate that causal reasoning according to the causal model framework is most naturally and intuitively done as diagrammatic reasoning.
arXiv Detail & Related papers (2023-04-15T21:54:48Z)
- Phenomenological Causality [14.817342045377842]
We propose a notion of 'phenomenological causality' whose basic concept is a set of elementary actions.
We argue that it is consistent with the causal Markov condition when the system under consideration interacts with other variables that control the elementary actions.
arXiv Detail & Related papers (2022-11-15T13:05:45Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Markov categories, causal theories, and the do-calculus [7.061298918159947]
We give a category-theoretic treatment of causal models that formalizes the syntax for causal reasoning over a directed acyclic graph (DAG).
This framework enables us to define and study important concepts in causal reasoning from an abstract and "purely causal" point of view.
arXiv Detail & Related papers (2022-04-11T01:27:41Z)
- Causality Inspired Representation Learning for Domain Generalization [47.574964496891404]
We introduce a general structural causal model to formalize the domain generalization problem.
Our goal is to extract the causal factors from inputs and then reconstruct the invariant causal mechanisms.
We highlight that ideal causal factors should meet three basic properties: separated from the non-causal ones, jointly independent, and causally sufficient for the classification.
arXiv Detail & Related papers (2022-03-27T08:08:33Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)