Learning Relational Causal Models with Cycles through Relational
Acyclification
- URL: http://arxiv.org/abs/2208.12210v2
- Date: Fri, 26 Aug 2022 15:54:07 GMT
- Title: Learning Relational Causal Models with Cycles through Relational
Acyclification
- Authors: Ragib Ahsan, David Arbour, Elena Zheleva
- Abstract summary: We introduce \textit{relational acyclification}, an operation specifically designed for relational models.
We show that under the assumptions of relational acyclification and $\sigma$-faithfulness, the relational causal discovery algorithm RCD is sound and complete for cyclic models.
- Score: 16.10327013845982
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In real-world phenomena which involve mutual influence or causal effects
between interconnected units, equilibrium states are typically represented with
cycles in graphical models. An expressive class of graphical models,
\textit{relational causal models}, can represent and reason about complex
dynamic systems exhibiting such cycles or feedback loops. Existing cyclic
causal discovery algorithms for learning causal models from observational data
assume that the data instances are independent and identically distributed
which makes them unsuitable for relational causal models. At the same time,
causal discovery algorithms for relational causal models assume acyclicity. In
this work, we examine the necessary and sufficient conditions under which a
constraint-based relational causal discovery algorithm is sound and complete
for \textit{cyclic relational causal models}. We introduce \textit{relational
acyclification}, an operation specifically designed for relational models that
enables reasoning about the identifiability of cyclic relational causal models.
We show that under the assumptions of relational acyclification and
$\sigma$-faithfulness, the relational causal discovery algorithm RCD (Maier et
al. 2013) is sound and complete for cyclic models. We present experimental
results to support our claim.
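The acyclification idea underlying the paper (in the non-relational setting, due to Forré and Mooij) converts a cyclic directed graph into an acyclic one whose d-separations coincide with the $\sigma$-separations of the original: every node in a strongly connected component inherits the component's external parents, and nodes within a component are linked bidirectionally. A minimal, non-relational sketch of that construction, with illustrative names not taken from the paper's code:

```python
# Hedged sketch of graph acyclification (plain directed graphs, not the
# relational version introduced in the paper). Function names are illustrative.

def reachable(adj, start):
    """Return the set of nodes reachable from `start` by directed paths."""
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def sccs(adj):
    """Strongly connected components: u, v share an SCC iff each reaches the other."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    reach = {u: reachable(adj, u) for u in nodes}
    comps, assigned = [], set()
    for u in nodes:
        if u in assigned:
            continue
        comp = {u} | {v for v in reach[u] if u in reach[v]}
        comps.append(comp)
        assigned |= comp
    return comps

def acyclify(adj):
    """Build the acyclification: every node in an SCC receives the SCC's
    external parents as its own parents; intra-SCC directed edges are
    replaced by bidirected links among the SCC's members."""
    comp_of = {u: frozenset(c) for c in sccs(adj) for u in c}
    directed, bidirected = set(), set()
    for u, vs in adj.items():
        for v in vs:
            if comp_of[u] == comp_of[v]:
                continue  # intra-SCC edge: represented by bidirected links below
            for w in comp_of[v]:
                directed.add((u, w))  # external parent of the SCC -> every member
    for c in set(comp_of.values()):
        bidirected |= {(u, v) for u in c for v in c if u < v}
    return directed, bidirected
```

For the two-node feedback loop a → b → a with an exogenous x → a, the acyclification makes x a parent of both a and b and links a and b bidirectionally, so d-separation in the result mirrors $\sigma$-separation in the cyclic original.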
Related papers
- Targeted Reduction of Causal Models [55.11778726095353]
Causal Representation Learning offers a promising avenue to uncover interpretable causal patterns in simulations.
We introduce Targeted Causal Reduction (TCR), a method for condensing complex intervenable models into a concise set of causal factors.
Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems.
arXiv Detail & Related papers (2023-11-30T15:46:22Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- NODAGS-Flow: Nonlinear Cyclic Causal Structure Learning [8.20217860574125]
We propose a novel framework for learning nonlinear cyclic causal models from interventional data, called NODAGS-Flow.
We show significant performance improvements with our approach compared to state-of-the-art methods with respect to structure recovery and predictive performance.
arXiv Detail & Related papers (2023-01-04T23:28:18Z)
- Relational Causal Models with Cycles: Representation and Reasoning [16.10327013845982]
We introduce relational $\sigma$-separation, a new criterion for reasoning about relational systems with feedback loops.
We establish necessary and sufficient conditions for the completeness of $\sigma$-AGG and show that relational $\sigma$-separation is sound and complete in the presence of one or more cycles of arbitrary length.
arXiv Detail & Related papers (2022-02-22T07:37:17Z)
- A general framework for cyclic and fine-tuned causal models and their
compatibility with space-time [2.0305676256390934]
Causal modelling is a tool for generating causal explanations of observed correlations.
Existing frameworks for quantum causality tend to focus on acyclic causal structures that are not fine-tuned.
Cyclic causal models can be used to model physical processes involving feedback.
Cyclic causal models may also be relevant in exotic solutions of general relativity.
arXiv Detail & Related papers (2021-09-24T18:00:08Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Causal Inference with Deep Causal Graphs [0.0]
Parametric causal modelling techniques rarely provide functionality for counterfactual estimation.
Deep Causal Graphs is an abstract specification of the required functionality for a neural network to model causal distributions.
We demonstrate its expressive power in modelling complex interactions and showcase applications to machine learning explainability and fairness.
arXiv Detail & Related papers (2020-06-15T13:03:33Z)
- Structure Learning for Cyclic Linear Causal Models [5.567377163246147]
We consider the problem of structure learning for linear causal models based on observational data.
We treat models given by possibly cyclic mixed graphs, which allow for feedback loops and effects of latent confounders.
arXiv Detail & Related papers (2020-06-10T17:47:28Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- A Critical View of the Structural Causal Model [89.43277111586258]
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.