Approximate Causal Effect Identification under Weak Confounding
- URL: http://arxiv.org/abs/2306.13242v1
- Date: Thu, 22 Jun 2023 23:35:49 GMT
- Title: Approximate Causal Effect Identification under Weak Confounding
- Authors: Ziwei Jiang, Lai Wei and Murat Kocaoglu
- Abstract summary: We propose an efficient linear program to derive the upper and lower bounds of the causal effect. We show that our bounds are consistent in the sense that as the entropy of unobserved confounders goes to zero, the gap between the upper and lower bounds vanishes.
- Score: 13.552959043816482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal effect estimation has been studied by many researchers when only
observational data is available. Sound and complete algorithms have been
developed for pointwise estimation of identifiable causal queries. For
non-identifiable causal queries, researchers developed polynomial programs to
estimate tight bounds on causal effects. However, these programs are computationally
difficult to optimize for variables with large support sizes. In this paper, we
analyze the effect of "weak confounding" on causal estimands. More
specifically, under the assumption that the unobserved confounders that render
a query non-identifiable have small entropy, we propose an efficient linear
program to derive the upper and lower bounds of the causal effect. We show that
our bounds are consistent in the sense that as the entropy of unobserved
confounders goes to zero, the gap between the upper and lower bound vanishes.
Finally, we conduct synthetic and real data simulations to compare our bounds
with the bounds obtained by the existing work that cannot incorporate such
entropy constraints and show that our bounds are tighter for the setting with
weak confounders.
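The entropy-constrained program itself is derived in the paper; the sketch below only illustrates the standard baseline it builds on, a canonical-partition linear program over latent response types that, without any entropy constraint, recovers the classical natural bounds on an interventional query. All probabilities here are hypothetical, and this is not the paper's own formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical observational distribution P(X, Y) for binary X, Y.
p_obs = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# Canonical latent states (x, y0, y1): observed X and potential outcomes Y(0), Y(1).
states = [(x, y0, y1) for x in (0, 1) for y0 in (0, 1) for y1 in (0, 1)]

# Objective: P(Y = 1 | do(X = 1)) = total mass on states with y1 = 1.
c = np.array([1.0 if y1 == 1 else 0.0 for (_, _, y1) in states])

# Equality constraints: mass sums to 1, and the latent distribution must
# reproduce each observed cell P(X = x, Y = y) under consistency Y = Y(X).
A_eq = [np.ones(len(states))]
b_eq = [1.0]
for (x, y), p in p_obs.items():
    A_eq.append([1.0 if (s[0] == x and s[1 + x] == y) else 0.0 for s in states])
    b_eq.append(p)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"{lo:.2f} <= P(Y=1 | do(X=1)) <= {hi:.2f}")
```

For these numbers the LP returns the natural bounds [P(X=1, Y=1), P(X=1, Y=1) + P(X=0)] = [0.40, 0.90]; the paper's contribution is to tighten such intervals by additionally constraining the latent confounder to have small entropy.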
Related papers
- Causal Discovery of Linear Non-Gaussian Causal Models with Unobserved Confounding [1.6932009464531739]
We consider linear non-Gaussian structural equation models that involve latent confounding.
In this setting, the causal structure is identifiable, but, in general, it is not possible to identify the specific causal effects.
arXiv Detail & Related papers (2024-08-09T07:24:12Z)
- Scalable Computation of Causal Bounds [11.193504036335503]
We consider the problem of computing bounds for causal queries on causal graphs with unobserved confounders and discrete valued observed variables.
Existing approaches for computing such bounds use linear programming (LP) formulations that quickly become intractable for existing solvers.
We show that this LP can be significantly pruned, allowing us to compute bounds for significantly larger causal inference problems compared to existing techniques.
arXiv Detail & Related papers (2023-08-04T21:00:46Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z) - Causal discovery under a confounder blanket [9.196779204457059]
Inferring causal relationships from observational data is rarely straightforward, but the problem is especially difficult in high dimensions.
We relax these assumptions and focus on an important but more specialized problem, namely recovering a directed acyclic subgraph.
We derive a complete algorithm for identifying causal relationships under these conditions and implement testing procedures.
arXiv Detail & Related papers (2022-05-11T18:10:45Z) - On the Role of Entropy-based Loss for Learning Causal Structures with
Continuous Optimization [27.613220411996025]
A method with non-combinatorial directed acyclic constraint, called NOTEARS, formulates the causal structure learning problem as a continuous optimization problem using least-square loss.
We show that the violation of the Gaussian noise assumption will hinder the causal direction identification.
We propose a more general entropy-based loss that is theoretically consistent with the likelihood score under any noise distribution.
arXiv Detail & Related papers (2021-06-05T08:29:51Z) - Deconfounded Score Method: Scoring DAGs with Dense Unobserved
Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.