Causal Lifting and Link Prediction
- URL: http://arxiv.org/abs/2302.01198v2
- Date: Thu, 27 Jul 2023 16:11:42 GMT
- Title: Causal Lifting and Link Prediction
- Authors: Leonardo Cotta, Beatrice Bevilacqua, Nesreen Ahmed, Bruno Ribeiro
- Abstract summary: We develop the first causal model capable of dealing with path dependencies in link prediction.
We show how structural pairwise embeddings exhibit lower bias and correctly represent the task's causal structure.
We validate our theoretical findings on three scenarios for causal link prediction tasks.
- Score: 10.336445584242933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing causal models for link prediction assume an underlying set of
inherent node factors -- an innate characteristic defined at the node's birth
-- that governs the causal evolution of links in the graph. In some causal
tasks, however, link formation is path-dependent: The outcome of link
interventions depends on existing links. Unfortunately, these existing causal
methods are not designed for path-dependent link formation, as the cascading
functional dependencies between links (arising from path dependence) are either
unidentifiable or require an impractical number of control variables. To
overcome this, we develop the first causal model capable of dealing with path
dependencies in link prediction. In this work we introduce the concept of
causal lifting, an invariance in causal models of independent interest that, on
graphs, allows the identification of causal link prediction queries using
limited interventional data. Further, we show how structural pairwise
embeddings exhibit lower bias and correctly represent the task's causal
structure, as opposed to existing node embeddings, e.g., graph neural network
node embeddings and matrix factorization. Finally, we validate our theoretical
findings on three scenarios for causal link prediction tasks: knowledge base
completion, covariance matrix estimation and consumer-product recommendations.
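The abstract's contrast between structural pairwise embeddings and positional node embeddings can be illustrated with a minimal sketch (this is not the paper's implementation; the features chosen here are illustrative toy statistics): an embedding computed from the joint structure around a node pair assigns identical representations to automorphically equivalent pairs, a symmetry that node-identity-based embeddings such as matrix factorization generally break.

```python
from collections import deque

def shortest_path_len(adj, u, v):
    # BFS shortest-path distance between u and v; -1 if unreachable.
    if u == v:
        return 0
    seen, queue = {u}, deque([(u, 0)])
    while queue:
        node, d = queue.popleft()
        for nbr in adj[node]:
            if nbr == v:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return -1

def pair_embedding(adj, u, v):
    # A toy structural pairwise embedding:
    # (#common neighbors, shortest-path distance, sorted degree pair).
    common = len(set(adj[u]) & set(adj[v]))
    dist = shortest_path_len(adj, u, v)
    degrees = tuple(sorted((len(adj[u]), len(adj[v]))))
    return (common, dist, degrees)

# A 6-cycle: node pairs at the same distance are automorphically equivalent.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# Pairs (0, 2) and (3, 5) are related by a rotation of the cycle,
# so any structural pairwise embedding must agree on them.
print(pair_embedding(adj, 0, 2))  # (1, 2, (2, 2))
print(pair_embedding(adj, 3, 5))  # (1, 2, (2, 2))
```

A positional embedding trained on this graph would typically give nodes 0, 2, 3, 5 distinct vectors, so the pairs (0, 2) and (3, 5) would receive different scores despite being structurally indistinguishable.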
Related papers
- Influence of Backdoor Paths on Causal Link Prediction [0.0]
CausalLPBack is a novel approach to causal link prediction that eliminates backdoor paths and uses knowledge graph link prediction methods.
The evaluation involves a dataset splitting method called the Markov-based split that is relevant for causal link prediction.
arXiv Detail & Related papers (2024-09-12T22:16:36Z)
- CausalLP: Learning causal relations with weighted knowledge graph link prediction [5.3454230926797734]
CausalLP formulates the problem of incomplete causal networks as a knowledge graph completion problem.
The use of knowledge graphs to represent causal relations enables the integration of external domain knowledge.
Two primary tasks are supported by CausalLP: causal explanation and causal prediction.
arXiv Detail & Related papers (2024-04-23T20:50:06Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm for acquiring reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Variational Disentangled Graph Auto-Encoders for Link Prediction [10.390861526194662]
This paper proposes a novel framework with two variants, the disentangled graph auto-encoder (DGAE) and the variational disentangled graph auto-encoder (VDGAE).
The proposed framework infers the latent factors that cause edges in the graph and disentangles the representation into multiple channels corresponding to unique latent factors.
arXiv Detail & Related papers (2023-06-20T06:25:05Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground-truth latents and their causal graph up to a set of ambiguities that we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for this purpose.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- Autoregressive flow-based causal discovery and inference [4.83420384410068]
Autoregressive flow models are well-suited to performing a range of causal inference tasks.
We exploit the fact that autoregressive architectures define an ordering over variables, analogous to a causal ordering.
We present examples over synthetic data where autoregressive flows, when trained under the correct causal ordering, are able to make accurate interventional and counterfactual predictions.
arXiv Detail & Related papers (2020-07-18T10:02:59Z)
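The idea behind the last entry, that fitting a model under the correct causal ordering supports interventional prediction, can be sketched minimally (plain least squares stands in for a flow model here; the setup is illustrative, not taken from that paper): fit the mechanism X -> Y in causal order, then answer the query E[Y | do(X = 3)].

```python
import random

random.seed(0)

# Simulate a linear structural causal model: X -> Y, with Y = 2*X + noise.
n = 5000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]

# Fit under the correct causal ordering (X before Y): least-squares
# regression of Y on X recovers the causal mechanism's coefficient.
mean_x = sum(xs) / n
mean_y = sum(ys) / n
beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
       sum((x - mean_x) ** 2 for x in xs)

# Interventional prediction: under do(X = 3), E[Y] = beta * 3 ≈ 6.
print(round(beta * 3, 1))  # ≈ 6.0
```

Fitting in the wrong order (regressing X on Y and inverting) would conflate the mechanism with the noise distribution, which is why the ordering matters for interventional queries.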
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.