On the Use of Generative Models in Observational Causal Analysis
- URL: http://arxiv.org/abs/2306.04792v1
- Date: Wed, 7 Jun 2023 21:29:49 GMT
- Title: On the Use of Generative Models in Observational Causal Analysis
- Authors: Nimrod Megiddo
- Abstract summary: The use of a hypothetical generative model has been suggested for causal analysis of observational data.
The model describes a single observable distribution and cannot describe the chain of effects of an intervention that deviates from the observed distribution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of a hypothetical generative model has been suggested for
causal analysis of observational data. The very assumption of a particular
model is a commitment to a certain set of variables and therefore to a certain
set of possible causes. Estimating the joint probability distribution of the
variables can be useful for predicting the values of some variables in view of
the observed values of others, but it is not sufficient for inferring causal
relationships. The model describes a single observable distribution and cannot
describe the chain of effects of an intervention that deviates from the
observed distribution.
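A minimal numerical sketch of this point (not from the paper; the linear-Gaussian models and coefficients are illustrative assumptions): two structural models, one with X -> Y and one with Y -> X, are tuned to induce exactly the same observational joint distribution, yet they answer the interventional query do(X = x) differently, so a perfect fit to the observed distribution cannot distinguish them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, s = 0.8, 0.6                 # structural coefficient and noise scale (illustrative)
vy = a**2 + s**2                # marginal variance of Y in both models (= 1.0 here)

# Model A: X causes Y.  X ~ N(0, 1),  Y := a*X + eps,  eps ~ N(0, s^2)
xA = rng.normal(0.0, 1.0, n)
yA = a * xA + rng.normal(0.0, s, n)

# Model B: Y causes X, tuned to induce the SAME observational joint.
# Y ~ N(0, vy),  X := (a/vy)*Y + eps',  Var(eps') = 1 - a^2/vy
yB = rng.normal(0.0, np.sqrt(vy), n)
xB = (a / vy) * yB + rng.normal(0.0, np.sqrt(1.0 - a**2 / vy), n)

# Observationally indistinguishable: both covariances are ~[[1, a], [a, vy]].
print(np.cov(xA, yA))
print(np.cov(xB, yB))

# Intervention do(X = 2): in model A it propagates to Y; in model B it leaves Y alone.
x_do = 2.0
yA_do = a * x_do + rng.normal(0.0, s, n)    # E[Y | do(X=2)] = a*2 = 1.6 under model A
yB_do = rng.normal(0.0, np.sqrt(vy), n)     # E[Y | do(X=2)] = 0.0 under model B
print(yA_do.mean(), yB_do.mean())
```

The two covariance printouts agree up to sampling error, while the two interventional means differ (about 1.6 versus 0), which is exactly the gap between modeling a single observable distribution and modeling the effects of an intervention.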
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand approaches; a minimal sketch of such an interventional query over a causal Bayesian network appears after this list.
arXiv Detail & Related papers (2024-08-26T08:39:09Z) - Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - Invariance & Causal Representation Learning: Prospects and Limitations [15.935205681539145]
In causal models, a given mechanism is assumed to be invariant to changes of other mechanisms.
We show that invariance alone is insufficient to identify latent causal variables.
arXiv Detail & Related papers (2023-12-06T16:16:31Z) - Model-agnostic variable importance for predictive uncertainty: an entropy-based approach [1.912429179274357]
We show how existing methods in explainability can be extended to uncertainty-aware models.
We demonstrate the utility of these approaches to understand both the sources of uncertainty and their impact on model performance.
arXiv Detail & Related papers (2023-10-19T15:51:23Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Restricted Hidden Cardinality Constraints in Causal Models [0.0]
Causal models with unobserved variables impose nontrivial constraints on the distributions over the observed variables.
We consider causal models with a promise that unobserved variables have known cardinalities.
arXiv Detail & Related papers (2021-09-13T00:52:08Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z) - Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
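Following up the first related paper above, here is a minimal sketch (a toy network with made-up probability tables, not the authors' model) of the kind of causal-effect query a causal Bayesian network supports once its structure and parameters are known: the interventional probability P(Y | do(X)) is computed by the truncated factorization, and it differs from the ordinary conditional P(Y | X), which is confounded by Z.

```python
import numpy as np

# Toy causal Bayesian network: Z -> X -> Y and Z -> Y (Z confounds X and Y).
# All probability tables are illustrative assumptions.
p_z = np.array([0.7, 0.3])               # P(Z=z)
p_x1_given_z = np.array([0.1, 0.8])      # P(X=1 | Z=z)
p_y1_given_xz = np.array([[0.2, 0.7],    # P(Y=1 | X=0, Z=z)
                          [0.4, 0.9]])   # P(Y=1 | X=1, Z=z)

x = 1

# Conditioning: P(Y=1 | X=1) reweights Z by its posterior given X=1.
p_x1 = np.sum(p_z * p_x1_given_z)
p_z_given_x1 = p_z * p_x1_given_z / p_x1
p_y1_cond = np.sum(p_z_given_x1 * p_y1_given_xz[x])

# Intervening: do(X=1) cuts the Z -> X edge, so Z keeps its prior
# (the truncated factorization of the do-calculus).
p_y1_do = np.sum(p_z * p_y1_given_xz[x])

print(f"P(Y=1 | X=1)     = {p_y1_cond:.3f}")   # ~0.787, inflated by confounding
print(f"P(Y=1 | do(X=1)) = {p_y1_do:.3f}")     # 0.550, the causal effect
```

The two numbers differ because conditioning on X = 1 also shifts the distribution of the confounder Z, whereas the intervention does not; this is the distinction the main abstract argues a purely observational generative model cannot capture on its own.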