Structural Causal Models Are (Solvable by) Credal Networks
- URL: http://arxiv.org/abs/2008.00463v1
- Date: Sun, 2 Aug 2020 11:19:36 GMT
- Title: Structural Causal Models Are (Solvable by) Credal Networks
- Authors: Marco Zaffalon and Alessandro Antonucci and Rafael Cabañas
- Abstract summary: Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
- Score: 70.45873402967297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A structural causal model is made of endogenous (manifest) and exogenous
(latent) variables. We show that endogenous observations induce linear
constraints on the probabilities of the exogenous variables. This makes it
possible to map a causal model exactly into a credal network. Causal inferences, such as
interventions and counterfactuals, can consequently be obtained by standard
algorithms for the updating of credal nets. These natively return sharp values
in the identifiable case, while intervals corresponding to the exact bounds are
produced for unidentifiable queries. A characterization of the causal models
that allow the map above to be compactly derived is given, along with a
discussion about the scalability for general models. This contribution should
be regarded as a systematic approach to represent structural causal models by
credal networks and hence to systematically compute causal inferences. A number
of demonstrative examples are presented to clarify our methodology. Extensive
experiments show that approximate algorithms for credal networks can
immediately be used to do causal inference in real-size problems.
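The abstract's central observation, that endogenous observations induce linear constraints on the exogenous probabilities, so identifiable queries come back sharp while unidentifiable ones come back as exact intervals, can be illustrated on a minimal toy model. The sketch below uses hypothetical numbers (not from the paper): binary randomised treatment X and binary outcome Y, with the exogenous variable U taken as the canonical "response type". The observed conditionals leave one degree of freedom, and scanning it traces out the credal set and the bounds on the (unidentifiable) probability of benefit.

```python
# A minimal sketch (hypothetical numbers, not from the paper): with binary
# randomised X and binary Y, the exogenous variable U is the "response type"
# (y_when_x0, y_when_x1), with probabilities p00, p01, p10, p11.  The observed
# conditionals impose two linear constraints on these probabilities, leaving
# one degree of freedom (parameterised here by p11); scanning it traces out
# the credal set and yields exact bounds for the unidentifiable query.

p_y1_x0 = 0.3  # observed P(Y=1 | X=0)
p_y1_x1 = 0.7  # observed P(Y=1 | X=1)

benefit = []   # query: probability of benefit p01 = P(Y_{x=1}=1, Y_{x=0}=0)
steps = 10_000
for i in range(steps + 1):
    p11 = i / steps
    p10 = p_y1_x0 - p11           # from P(Y=1|X=0) = p10 + p11
    p01 = p_y1_x1 - p11           # from P(Y=1|X=1) = p01 + p11
    p00 = 1.0 - p01 - p10 - p11   # normalisation
    if min(p00, p01, p10, p11) >= -1e-12:  # point lies in the credal set?
        benefit.append(p01)

print(f"[{min(benefit):.3f}, {max(benefit):.3f}]")  # -> [0.400, 0.700]
```

The interval matches the classical closed-form bounds max(0, P(Y=1|X=1) - P(Y=1|X=0)) and min(P(Y=1|X=1), 1 - P(Y=1|X=0)); in larger models the same bounds would come from credal-network updating rather than a one-dimensional scan.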
Related papers
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Consistency of Neural Causal Partial Identification [17.503562318576414]
Recent progress in Causal Models showcased how identification and partial identification of causal effects can be automatically carried out via neural generative models.
We prove consistency of partial identification via NCMs in a general setting with both continuous and categorical variables.
Results highlight the impact of the design of the underlying neural network architecture in terms of depth and connectivity.
arXiv Detail & Related papers (2024-05-24T16:12:39Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- General Identifiability and Achievability for Causal Representation Learning [33.80247458590611]
The paper establishes identifiability and achievability results using two hard uncoupled interventions per node in the latent causal graph.
For identifiability, the paper establishes that perfect recovery of the latent causal model and variables is guaranteed under uncoupled interventions.
The analysis additionally recovers the identifiability result for two hard coupled interventions, that is, when metadata about the pair of environments in which the same node is intervened on is known.
arXiv Detail & Related papers (2023-10-24T01:47:44Z)
- Causal Analysis for Robust Interpretability of Neural Networks [0.2519906683279152]
We develop a robust interventional-based method to capture cause-effect mechanisms in pre-trained neural networks.
We apply our method to vision models trained on classification tasks.
arXiv Detail & Related papers (2023-05-15T18:37:24Z)
- Representation Disentanglement via Regularization by Causal Identification [3.9160947065896803]
We propose the use of a causal collider structured model to describe the underlying data generative process assumptions in disentangled representation learning.
For this, we propose regularization by identification (ReI), a modular regularization engine designed to align the behavior of large scale generative models with the disentanglement constraints imposed by causal identification.
arXiv Detail & Related papers (2023-02-28T23:18:54Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
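The causal EM idea summarised above, reconstructing the uncertainty about latent variables from data on categorical manifest variables, can be sketched as a generic EM loop over the same response-type model as before. The example below is an illustration of the general EM scheme under a known structural equation, with hypothetical counts; it is not the paper's algorithm.

```python
# A generic EM sketch (hypothetical counts, not the paper's algorithm):
# latent U is the response type (y_when_x0, y_when_x1), the structural
# equation is y = u[x], and we estimate P(U) from observed (x, y) counts.
from collections import Counter

U = [(0, 0), (0, 1), (1, 0), (1, 1)]  # latent response types

# Hypothetical observed counts of (x, y) pairs
counts = Counter({(0, 0): 70, (0, 1): 30, (1, 0): 30, (1, 1): 70})
n = sum(counts.values())

p = {u: 0.25 for u in U}  # uniform initialisation over latent states
for _ in range(500):
    post_sum = {u: 0.0 for u in U}
    for (x, y), c in counts.items():
        # E-step: posterior over U given (x, y), supported on compatible types
        compat = {u: p[u] for u in U if u[x] == y}
        z = sum(compat.values())
        for u, w in compat.items():
            post_sum[u] += c * w / z
    # M-step: new P(U) is the average posterior over the data
    p = {u: s / n for u, s in post_sum.items()}

print({u: round(q, 3) for u, q in p.items()})
```

At convergence the fitted P(U) reproduces the observed conditionals (here P(Y=1|X=0)=0.3 and P(Y=1|X=1)=0.7); since the likelihood is flat along the unidentified direction, the particular point reached depends on the initialisation, which is exactly the uncertainty that interval bounds make explicit.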
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.