A continuous Structural Intervention Distance to compare Causal Graphs
- URL: http://arxiv.org/abs/2307.16452v1
- Date: Mon, 31 Jul 2023 07:20:26 GMT
- Title: A continuous Structural Intervention Distance to compare Causal Graphs
- Authors: Mihir Dhanakshirur, Felix Laumann, Junhyung Park, Mauricio Barahona
- Abstract summary: The distance is based on embedding intervention distributions over each pair of nodes.
We show theoretical results which we validate with numerical experiments on synthetic data.
- Score: 5.477914707166288
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding and adequately assessing the difference between a true and a
learnt causal graph is crucial for causal inference under interventions. As an
extension of the graph-based structural Hamming distance and structural
intervention distance, we propose a novel continuous metric that
considers the underlying data, in addition to the graph structure, when
calculating the difference between a true and a learnt causal graph. The
distance is based on embedding the intervention distributions over each pair of
nodes as conditional mean embeddings into reproducing kernel Hilbert spaces and
estimating their difference by the maximum (conditional) mean discrepancy. We
present theoretical results, which we validate with numerical experiments on
synthetic data.
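The abstract's core estimator is the maximum mean discrepancy (MMD): embed each distribution in an RKHS via its kernel mean and measure the distance between the embeddings. A minimal sketch of the standard biased MMD^2 estimator with a Gaussian kernel is below; the function names, the bandwidth `sigma=1.0`, and the toy Gaussian samples are illustrative assumptions, not the paper's implementation (which uses conditional mean embeddings over intervention distributions).

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between sample sets a and b.
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # the distributions that generated samples x and y:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
# Same distribution: MMD^2 should be near zero.
same = mmd2(rng.normal(0, 1, (500, 1)), rng.normal(0, 1, (500, 1)))
# Shifted distribution: MMD^2 should be clearly larger.
diff = mmd2(rng.normal(0, 1, (500, 1)), rng.normal(3, 1, (500, 1)))
```

In the paper's setting, such a discrepancy is computed between intervention distributions (via conditional mean embeddings) over each pair of nodes, rather than between raw marginal samples as in this toy sketch.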
Related papers
- Sample Efficient Bayesian Learning of Causal Graphs from Interventions [6.823521786512908]
This study considers a Bayesian approach for learning causal graphs with limited interventional samples.
We show theoretically that our proposed algorithm will return the true causal graph with high probability.
We present a case study showing how this algorithm could be modified to answer more general causal questions without learning the whole graph.
arXiv Detail & Related papers (2024-10-26T05:47:56Z)
- Separation-based distance measures for causal graphs [15.37737222790121]
State-of-the-art causal discovery methods do not output single causal graphs, but rather their equivalence classes (MECs)
We propose additional measures of distance that capture differences in separations which existing distances are not fit to assess.
We complement our theoretical analysis with toy examples and empirical experiments that highlight the differences to existing comparison metrics.
arXiv Detail & Related papers (2024-02-07T15:36:53Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge [36.210656212459554]
We provide testable conditional independence statements to compute the causal effect using front-door-like adjustment.
We show that our method is applicable in scenarios where knowing the Markov equivalence class is not sufficient for causal effect estimation.
arXiv Detail & Related papers (2023-06-19T15:16:56Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Projections of Model Spaces for Latent Graph Inference [18.219577154655006]
Graph Neural Networks leverage the connectivity structure of graphs as an inductive bias.
Latent graph inference focuses on learning an adequate graph structure to diffuse information on and improve the downstream performance of the model.
arXiv Detail & Related papers (2023-03-21T11:20:22Z)
- Beyond spectral gap (extended): The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
Current theory does not explain why collaboration enables larger learning rates than training alone.
This paper aims to paint an accurate picture of sparsely-connected distributed optimization.
arXiv Detail & Related papers (2023-01-05T16:53:38Z)
- Beyond spectral gap: The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution.
Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.
arXiv Detail & Related papers (2022-06-07T08:19:06Z)
- Learning to Induce Causal Structure [29.810917060087117]
We propose a neural network architecture that learns the mapping from both observational and interventional data to graph structures via supervised training on synthetic graphs.
We show that the proposed model generalizes not only to new synthetic graphs but also to naturalistic graphs.
arXiv Detail & Related papers (2022-04-11T05:38:22Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build off of semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincare embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.