Lifted Causal Inference in Relational Domains
- URL: http://arxiv.org/abs/2403.10184v1
- Date: Fri, 15 Mar 2024 10:44:27 GMT
- Title: Lifted Causal Inference in Relational Domains
- Authors: Malte Luttermann, Mattis Hartwig, Tanya Braun, Ralf Möller, Marcel Gehrke
- Abstract summary: We show how lifting can be applied to efficiently compute causal effects in relational domains.
We present the lifted causal inference algorithm to compute causal effects on a lifted level.
- Score: 5.170468311431656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lifted inference exploits symmetries in probabilistic graphical models by using a representative for indistinguishable objects, thereby speeding up query answering while maintaining exact answers. Even though lifting is a well-established technique for the task of probabilistic inference in relational domains, it has not yet been applied to the task of causal inference. In this paper, we show how lifting can be applied to efficiently compute causal effects in relational domains. More specifically, we introduce parametric causal factor graphs as an extension of parametric factor graphs incorporating causal knowledge and give a formal semantics of interventions therein. We further present the lifted causal inference algorithm to compute causal effects on a lifted level, thereby drastically speeding up causal inference compared to propositional inference, e.g., in causal Bayesian networks. In our empirical evaluation, we demonstrate the effectiveness of our approach.
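To give intuition for why lifting pays off, here is a minimal sketch (not the authors' parametric causal factor graph implementation): in a symmetric model where a single variable, say Epid, is linked to n indistinguishable individuals Sick(X) through identical factors, propositional inference enumerates all 2^n ground assignments, while lifted inference sums out a single representative and raises the result to the power n. The factor values and variable names below are hypothetical.

```python
import itertools

# Hypothetical shared factor phi(Epid, Sick(X)); all n individuals use the same table.
phi = {
    (0, 0): 3.0, (0, 1): 1.0,
    (1, 0): 2.0, (1, 1): 4.0,
}

def propositional_marginal(n):
    """Ground the model and sum over all 2^n joint assignments of Sick(x_1), ..., Sick(x_n)."""
    scores = []
    for epid in (0, 1):
        total = 0.0
        for sick in itertools.product((0, 1), repeat=n):
            weight = 1.0
            for s in sick:
                weight *= phi[(epid, s)]
            total += weight
        scores.append(total)
    z = sum(scores)
    return [s / z for s in scores]

def lifted_marginal(n):
    """Sum out a single representative Sick(X) and exponentiate by the group size n."""
    scores = []
    for epid in (0, 1):
        rep = phi[(epid, 0)] + phi[(epid, 1)]
        scores.append(rep ** n)
    z = sum(scores)
    return [s / z for s in scores]

if __name__ == "__main__":
    n = 10
    print(propositional_marginal(n))  # work exponential in n
    print(lifted_marginal(n))         # constant work in n, identical result
```

Both functions return the same marginal for Epid, but the lifted version does constant work in the domain size n; this is the kind of speed-up over propositional inference in causal Bayesian networks that the abstract refers to.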
Related papers
- Estimating Causal Effects in Partially Directed Parametric Causal Factor Graphs [4.647149336191891]
We show how lifting can be applied to causal inference in partially directed graphs.
We show how causal inference can be performed on a lifted level in partially directed causal factor graphs.
arXiv Detail & Related papers (2024-11-11T14:05:39Z)
- An Overview of Causal Inference using Kernel Embeddings [14.298666697532838]
Kernel embeddings have emerged as a powerful tool for representing probability measures in a variety of statistical inference problems.
Main challenges include identifying causal associations and estimating the average treatment effect from observational data.
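As a generic point of reference for the treatment-effect task mentioned above, the sketch below estimates the average treatment effect from synthetic observational data via textbook backdoor adjustment over an assumed binary confounder; it is not the kernel-embedding estimator of the cited paper.

```python
import random

random.seed(0)

# Synthetic observational data: a binary confounder Z drives both treatment T and outcome Y.
# The true causal effect of T on Y is 1.0 by construction.
data = []
for _ in range(100_000):
    z = random.random() < 0.4                  # confounder
    t = random.random() < (0.7 if z else 0.2)  # treatment assignment depends on Z
    y = 1.0 * t + 2.0 * z + random.gauss(0.0, 0.1)
    data.append((z, t, y))

def mean_y(z, t):
    ys = [y for (zz, tt, y) in data if zz == z and tt == t]
    return sum(ys) / len(ys)

treated = [y for (_, t, y) in data if t]
control = [y for (_, t, y) in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# Backdoor adjustment: ATE = sum_z P(Z=z) * (E[Y | T=1, Z=z] - E[Y | T=0, Z=z]).
p_z = sum(1 for (z, _, _) in data if z) / len(data)
ate = ((mean_y(True, True) - mean_y(True, False)) * p_z
       + (mean_y(False, True) - mean_y(False, False)) * (1 - p_z))

print(f"naive difference in means: {naive:.3f}")  # inflated by confounding
print(f"adjusted ATE estimate:     {ate:.3f}")    # close to the true effect of 1.0
```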
arXiv Detail & Related papers (2024-10-30T07:23:34Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Bayesian Causal Inference with Gaussian Process Networks [1.7188280334580197]
We consider the problem of the Bayesian estimation of the effects of hypothetical interventions in the Gaussian Process Network model.
We detail how to perform causal inference on GPNs by simulating the effect of an intervention across the whole network and propagating the effect of the intervention on downstream variables.
We extend both frameworks beyond the case of a known causal graph, incorporating uncertainty about the causal structure via Markov chain Monte Carlo methods.
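The idea of simulating an intervention and propagating its effect to downstream variables can be sketched with a hand-written linear-Gaussian structural causal model (a stand-in for illustration only, not the Gaussian Process Network model of the cited paper): under do(Y := y0), the structural equation for Y is replaced by the constant y0 and Y's descendants are sampled as usual.

```python
import random

random.seed(0)

def sample(do_y=None):
    """Draw one sample from the SCM X -> Y -> Z, X -> Z, optionally under do(Y = do_y)."""
    x = random.gauss(0.0, 1.0)
    y = do_y if do_y is not None else 0.8 * x + random.gauss(0.0, 0.5)
    z = 1.5 * y - 0.3 * x + random.gauss(0.0, 0.5)   # downstream of the intervention
    return x, y, z

n = 50_000
obs = [sample() for _ in range(n)]
intv = [sample(do_y=2.0) for _ in range(n)]

print(f"E[Z] observational: {sum(z for *_, z in obs) / n:.3f}")   # about 0.0
print(f"E[Z | do(Y=2.0)]:   {sum(z for *_, z in intv) / n:.3f}")  # about 1.5 * 2.0 = 3.0
```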
arXiv Detail & Related papers (2024-02-01T14:39:59Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Causal Inference Using Tractable Circuits [11.358487655918676]
We show that probabilistic inference in the presence of unknown causal mechanisms can be tractable for models that have traditionally been viewed as intractable.
This has been enabled by a new technique that can exploit causal mechanisms computationally but without needing to know their identities.
Our goal is to provide a causality-oriented exposure to these new results and to speculate on how they may potentially contribute to more scalable and versatile causal inference.
arXiv Detail & Related papers (2022-02-07T00:09:39Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.