Conditions and Assumptions for Constraint-based Causal Structure Learning
- URL: http://arxiv.org/abs/2103.13521v1
- Date: Wed, 24 Mar 2021 23:08:00 GMT
- Title: Conditions and Assumptions for Constraint-based Causal Structure Learning
- Authors: Kayvan Sadeghi and Terry Soo
- Abstract summary: The paper formalizes constraint-based structure learning of the "true" causal graph from observed data.
We provide the theory for the general class of models under the assumption that the distribution is Markovian to the true causal graph.
- Score: 2.741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper formalizes constraint-based structure learning of the "true" causal graph from observed data in the presence of unobserved variables. We define a "generic" structure learning algorithm, which provides conditions that, under the faithfulness assumption, the output of all known exact algorithms in the literature must satisfy, and which outputs graphs that are Markov equivalent to the causal graph. More importantly, we provide clear assumptions, weaker than faithfulness, under which the same generic algorithm outputs graphs Markov equivalent to the causal graph. We provide the theory for the general class of models under the assumption that the distribution is Markovian to the true causal graph, and we specialize the definitions and results for structural causal models.
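For intuition, here is a minimal Python sketch of the skeleton phase that constraint-based algorithms such as PC and FCI share: starting from the complete graph, an edge is removed whenever some conditioning set renders its endpoints independent. The oracle `ci_test`, the function name, and the `max_cond_size` cutoff are our assumptions for illustration; the paper's "generic" algorithm is an abstraction over procedures of this kind, not this specific code.

```python
from itertools import combinations

def learn_skeleton(variables, ci_test, max_cond_size=3):
    """PC-style skeleton phase: remove edge (x, y) whenever a conditioning
    set S with x independent of y given S is found.

    `ci_test(x, y, S)` is a hypothetical conditional-independence oracle
    (e.g., a partial-correlation test at some significance level) that
    returns True when x and y are independent given the set S.
    """
    # Start from the complete undirected graph over `variables`.
    adjacencies = {v: set(variables) - {v} for v in variables}
    sepsets = {}

    for cond_size in range(max_cond_size + 1):
        for x in variables:
            for y in list(adjacencies[x]):
                if y not in adjacencies[x]:  # already removed symmetrically
                    continue
                # Condition only on current neighbours of x, excluding y.
                candidates = adjacencies[x] - {y}
                for S in combinations(sorted(candidates), cond_size):
                    if ci_test(x, y, set(S)):
                        adjacencies[x].discard(y)
                        adjacencies[y].discard(x)
                        sepsets[frozenset((x, y))] = set(S)
                        break
    return adjacencies, sepsets
```

Under faithfulness and a perfect oracle, the recovered adjacencies form the skeleton of a graph Markov equivalent to the causal graph; the paper characterizes weaker assumptions under which outputs of this kind remain correct.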
Related papers
- A General Framework for Constraint-based Causal Learning [3.031375888004876] (2024-08-14)
This paper provides a general framework for obtaining correctness conditions for causal learning.
It shows that the sparsest Markov representation condition is the weakest correctness condition arising from existing notions of minimality for maximal ancestral graphs and directed acyclic graphs.
- Identifiable Latent Neural Causal Models [82.14087963690561] (2024-03-23)
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
- Axiomatization of Interventional Probability Distributions [4.02487511510606] (2023-05-08)
Causal intervention is axiomatized under the rules of do-calculus.
We show that under our axiomatizations, the intervened distributions are Markovian to the defined intervened causal graphs.
We also show that a large class of natural structural causal models satisfy the theory presented here.
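For intuition, here is the classical instance of this Markov property, standard in the do-calculus setting (the paper's axiomatization is stated more generally): for a causal DAG $G$ whose distribution factorizes as $P(x_1,\dots,x_n)=\prod_i P(x_i \mid \mathrm{pa}_G(x_i))$, a point intervention yields the truncated factorization

```latex
% Truncated factorization for the point intervention do(X_j = x*):
P\bigl(x_1,\dots,x_n \mid \mathrm{do}(X_j = x^{*})\bigr)
  = \begin{cases}
      \prod_{i \neq j} P\bigl(x_i \mid \mathrm{pa}_G(x_i)\bigr) & \text{if } x_j = x^{*}, \\
      0 & \text{otherwise,}
    \end{cases}
```

which is Markovian to the intervened ("mutilated") graph obtained from $G$ by deleting all edges into $X_j$.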
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265] (2022-10-16)
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of the problem remain unaddressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
- Partial Disentanglement via Mechanism Sparsity [25.791043728989937] (2022-07-15)
Disentanglement via mechanism sparsity was introduced as a principled approach to extract latent factors without supervision.
We introduce a generalization of this theory which applies to any ground-truth graph.
Via a new equivalence relation over models, which we call consistency, we show how disentangled the learned representation is expected to be.
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911] (2022-05-20)
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
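Schematically, and only as our hedged reading of the abstract rather than the paper's exact objective, an information-bottleneck explanation seeks a subgraph $S$ of the input graph $G$ that is maximally informative about the learned representation $Z$ while staying compressed:

```latex
% Schematic information-bottleneck objective for explaining a graph
% representation Z: I(.;.) is mutual information; beta > 0 trades
% informativeness of the subgraph S against compression of G.
\max_{S \subseteq G} \; I(Z; S) - \beta \, I(G; S)
```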
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621] (2022-04-11)
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
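The abstract does not spell out the exact edit operators, so the following Python sketch is only a hypothetical illustration of edge-edit perturbations for building contrastive graph pairs; the function name, the 50/50 delete/insert mix, and the undirected-edge normalization are our assumptions.

```python
import random

def perturb_edges(nodes, edges, num_edits=2, seed=0):
    """Hypothetical edge-edit perturbation: each edit deletes a random
    existing edge or inserts a random new one, yielding a structurally
    close graph to serve as a contrastive counterpart."""
    rng = random.Random(seed)
    edges = set(edges)
    for _ in range(num_edits):
        if edges and rng.random() < 0.5:
            # Delete a randomly chosen existing edge.
            edges.remove(rng.choice(sorted(edges)))
        else:
            # Insert a new edge between two distinct random nodes;
            # the set keeps the graph simple if the edge already exists.
            u, v = rng.sample(sorted(nodes), 2)
            edges.add((min(u, v), max(u, v)))
    return edges
```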
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976] (2022-03-29)
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632] (2022-02-11)
The complex nature of graphs thwarts the adoption of the invariance principle for out-of-distribution (OOD) generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472] (2021-04-15)
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
- Learning non-Gaussian graphical models via Hessian scores and triangular transport [6.308539010172309] (2021-01-08)
We propose an algorithm for learning the Markov structure of continuous and non-Gaussian distributions.
Our algorithm SING estimates the density using a deterministic coupling, induced by a triangular transport map, and iteratively exploits sparse structure in the map to reveal sparsity in the graph.
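For intuition behind Hessian-based edge scores, a standard fact about smooth, strictly positive densities $\pi$ (the exact weighting used by SING may differ) is that conditional independence corresponds to a vanishing mixed partial of the log-density:

```latex
% Conditional independence given all remaining variables is equivalent to
% the (i, j) mixed partial of log pi vanishing identically:
X_i \perp\!\!\perp X_j \mid X_{\setminus\{i,j\}}
  \iff
  \frac{\partial^2 \log \pi(x)}{\partial x_i \, \partial x_j} = 0
  \quad \text{for all } x
```

Hence an averaged score such as $\mathbb{E}_\pi\bigl[(\partial^2 \log \pi / \partial x_i \partial x_j)^2\bigr]$ vanishes exactly on the non-edges of the Markov graph; for a Gaussian it reduces to the squared entries of the precision matrix.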
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.