On the Role of Priors in Bayesian Causal Learning
- URL: http://arxiv.org/abs/2504.01424v1
- Date: Wed, 02 Apr 2025 07:19:49 GMT
- Title: On the Role of Priors in Bayesian Causal Learning
- Authors: Bernhard C. Geiger, Roman Kern
- Abstract summary: We show in a didactically accessible manner that unlabeled data do not improve the estimation of the parameters defining the mechanism. We observe the importance of choosing an appropriate prior for the cause and mechanism parameters, respectively.
- Score: 12.319546463021654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we investigate causal learning of independent causal mechanisms from a Bayesian perspective. Confirming previous claims from the literature, we show in a didactically accessible manner that unlabeled data (i.e., cause realizations) do not improve the estimation of the parameters defining the mechanism. Furthermore, we observe the importance of choosing an appropriate prior for the cause and mechanism parameters, respectively. Specifically, we show that a factorized prior results in a factorized posterior, which resonates with Janzing and Schölkopf's definition of independent causal mechanisms via the Kolmogorov complexity of the involved distributions and with the concept of parameter independence of Heckerman et al.
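The abstract's two claims can be illustrated with a minimal conjugate-model sketch (a toy example constructed here, not code from the paper): cause X ~ Bernoulli(theta_c), mechanism Y | X=x ~ Bernoulli(theta_m[x]), with independent uniform Beta priors on all parameters. Because the prior factorizes over cause and mechanism parameters, the posterior factorizes as well, so cause-only ("unlabeled") observations update only the cause factor and leave the mechanism posterior untouched.

```python
def posterior(labeled, unlabeled):
    """Return Beta posterior parameters [alpha, beta] for the cause
    parameter theta_c and for each mechanism parameter theta_m[x],
    starting from uniform Beta(1, 1) priors."""
    cause = [1, 1]                  # Beta(alpha, beta) over theta_c
    mech = {0: [1, 1], 1: [1, 1]}   # Beta over theta_m[x], x in {0, 1}
    for x, y in labeled:            # (cause, effect) pairs update both factors
        cause[1 - x] += 1           # x=1 -> alpha += 1, x=0 -> beta += 1
        mech[x][1 - y] += 1
    for x in unlabeled:             # cause-only samples update theta_c alone
        cause[1 - x] += 1
    return cause, mech

labeled = [(0, 0), (1, 1), (1, 0)]
c1, m1 = posterior(labeled, unlabeled=[])
c2, m2 = posterior(labeled, unlabeled=[0, 1, 1, 1])
print(m1 == m2)  # True: mechanism posterior unchanged by unlabeled data
print(c1 == c2)  # False: cause posterior does change
```

With a non-factorized prior this decoupling would not hold in general, which is the sense in which the choice of prior matters.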
Related papers
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms [17.074858228123706]
We propose a framework for learning causally disentangled representations supervised by causally related observed labels.
We show that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.
arXiv Detail & Related papers (2023-06-02T00:28:48Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data [45.389985793060674]
Constraint-based causal discovery methods leverage conditional independence tests to infer causal relationships in a wide variety of applications.
We show that exchangeable data contains richer conditional independence structure than i.i.d. data, and show how the richer structure can be leveraged for causal discovery.
arXiv Detail & Related papers (2022-03-29T17:10:39Z) - Learning Generalized Gumbel-max Causal Mechanisms [31.64007831043909]
We argue for choosing a causal mechanism that is best under a quantitative criterion, such as minimizing variance when estimating counterfactual treatment effects.
We show that they can be trained to minimize counterfactual effect variance and other losses on a distribution of queries of interest.
arXiv Detail & Related papers (2021-11-11T22:02:20Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal
Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Independent mechanism analysis, a new concept? [3.2548794659022393]
Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process.
We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
arXiv Detail & Related papers (2021-06-09T16:45:00Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning methods can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Latent Instrumental Variables as Priors in Causal Inference based on Independence of Cause and Mechanism [2.28438857884398]
We study the role of latent variables such as latent instrumental variables and hidden common causes in the causal graphical structures.
We derive a novel algorithm to infer causal relationships between two variables.
arXiv Detail & Related papers (2020-07-17T08:18:19Z) - CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.