Entropic Inequality Constraints from $e$-separation Relations in
Directed Acyclic Graphs with Hidden Variables
- URL: http://arxiv.org/abs/2107.07087v1
- Date: Thu, 15 Jul 2021 02:43:33 GMT
- Title: Entropic Inequality Constraints from $e$-separation Relations in
Directed Acyclic Graphs with Hidden Variables
- Authors: Noam Finkelstein, Beata Zjawin, Elie Wolfe, Ilya Shpitser, Robert W.
Spekkens
- Abstract summary: We show that the capacity of variables along a causal pathway to convey information is restricted by their entropy.
We propose a measure of causal influence called the minimal mediary entropy, and demonstrate that it can augment traditional measures such as the average causal effect.
- Score: 8.242194776558895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Directed acyclic graphs (DAGs) with hidden variables are often used to
characterize causal relations between variables in a system. When some
variables are unobserved, DAGs imply a notoriously complicated set of
constraints on the distribution of observed variables. In this work, we present
entropic inequality constraints that are implied by $e$-separation relations in
hidden variable DAGs with discrete observed variables. The constraints can
intuitively be understood to follow from the fact that the capacity of
variables along a causal pathway to convey information is restricted by their
entropy; e.g., in the extreme case, a variable with entropy $0$ can convey no
information. We show how these constraints can be used to learn about the true
causal model from an observed data distribution. In addition, we propose a
measure of causal influence called the minimal mediary entropy, and demonstrate
that it can augment traditional measures such as the average causal effect.
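The core intuition — that a mediator's entropy caps how much information can flow through it — is an instance of the data processing inequality: for a chain $X \to M \to Y$, $I(X;Y) \le I(X;M) \le H(M)$. A minimal numeric sanity check of this bound (the distributions and variable names below are illustrative constructions, not taken from the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(pxy):
    """I(X;Y) in bits from a joint distribution matrix pxy[x, y]."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

rng = np.random.default_rng(0)

# Chain X -> M -> Y where X and Y range over 8 states,
# but the mediator M has only 2 states (so H(M) <= 1 bit).
n_x, n_m, n_y = 8, 2, 8
px = rng.dirichlet(np.ones(n_x))                    # P(X)
pm_given_x = rng.dirichlet(np.ones(n_m), size=n_x)  # P(M | X), rows of shape (n_x, n_m)
py_given_m = rng.dirichlet(np.ones(n_y), size=n_m)  # P(Y | M), rows of shape (n_m, n_y)

pxm = px[:, None] * pm_given_x   # joint P(X, M)
pxy = pxm @ py_given_m           # joint P(X, Y) = sum_m P(X, M) P(Y | M)

h_m = entropy(pxm.sum(axis=0))   # H(M), at most 1 bit for a binary M
i_xy = mutual_information(pxy)

# The mediator's entropy bounds the information X conveys to Y:
assert i_xy <= h_m + 1e-9
print(f"I(X;Y) = {i_xy:.3f} bits <= H(M) = {h_m:.3f} bits")
```

However the conditional distributions are chosen, $I(X;Y)$ never exceeds $H(M)$; this is the kind of observable entropic constraint the paper derives from $e$-separation relations.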
Related papers
- Linear causal disentanglement via higher-order cumulants [0.0]
We study the identifiability of linear causal disentanglement, assuming access to data under multiple contexts.
We show that one perfect intervention on each latent variable is sufficient and in the worst case necessary to recover parameters under perfect interventions.
arXiv Detail & Related papers (2024-07-05T15:53:16Z)
- Causal Representation Learning from Multiple Distributions: A General Setting [21.73088044465267]
This paper is concerned with a general, completely nonparametric setting of causal representation learning from multiple distributions.
We show that under the sparsity constraint on the recovered graph over the latent variables and suitable sufficient change conditions on the causal influences, one can recover the moralized graph of the underlying directed acyclic graph.
In some cases, most latent variables can even be recovered up to component-wise transformations.
arXiv Detail & Related papers (2024-02-07T17:51:38Z)
- Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Reinterpreting causal discovery as the task of predicting unobserved joint statistics [15.088547731564782]
We argue that causal discovery can help infer properties of unobserved joint distributions.
We define a learning scenario where the input is a subset of variables and the label is some statistical property of that subset.
arXiv Detail & Related papers (2023-05-11T15:30:54Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Discovery of Causal Additive Models in the Presence of Unobserved Variables [6.670414650224422]
Causal discovery from data affected by unobserved variables is an important but difficult problem to solve.
We propose a method to identify all the causal relationships that are theoretically possible to identify without being biased by unobserved variables.
arXiv Detail & Related papers (2021-06-04T03:28:27Z)
- Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Entropic Causal Inference: Identifiability and Finite Sample Results [14.495984877053948]
Entropic causal inference is a framework for inferring the causal direction between two categorical variables from observational data.
We consider the minimum entropy coupling-based algorithmic approach presented by Kocaoglu et al.
arXiv Detail & Related papers (2021-01-10T08:37:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.