Causal Relational Learning
- URL: http://arxiv.org/abs/2004.03644v1
- Date: Tue, 7 Apr 2020 18:33:05 GMT
- Title: Causal Relational Learning
- Authors: Babak Salimi, Harsh Parikh, Moe Kayali, Sudeepa Roy, Lise Getoor, and
Dan Suciu
- Abstract summary: We propose a declarative language called CaRL for capturing causal background knowledge and assumptions.
CaRL provides a foundation for inferring causality and reasoning about the effect of complex interventions in relational domains.
- Score: 29.082088734252213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference is at the heart of empirical research in natural and social
sciences and is critical for scientific discovery and informed decision making.
The gold standard in causal inference is performing randomized controlled
trials; unfortunately these are not always feasible due to ethical, legal, or
cost constraints. As an alternative, methodologies for causal inference from
observational data have been developed in statistical studies and social
sciences. However, existing methods critically rely on restrictive assumptions
such as the study population consisting of homogeneous elements that can be
represented in a single flat table, where each row is referred to as a unit. In
contrast, in many real-world settings, the study domain naturally consists of
heterogeneous elements with complex relational structure, where the data is
naturally represented in multiple related tables. In this paper, we present a
formal framework for causal inference from such relational data. We propose a
declarative language called CaRL for capturing causal background knowledge and
assumptions and specifying causal queries using simple Datalog-like rules. CaRL
provides a foundation for inferring causality and reasoning about the effect of
complex interventions in relational domains. We present an extensive
experimental evaluation on real relational data to illustrate the applicability
of CaRL in social sciences and healthcare.
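The central premise of the abstract is that the study population in relational domains cannot be squeezed into a single flat table of homogeneous units. As a rough illustration of that setting (not of CaRL itself, whose Datalog-like syntax is not reproduced in this summary), the Python sketch below assembles a unit-level treatment/outcome table from several related tables; the schema, column names, and aggregation choices are hypothetical.

```python
# Illustrative sketch only: the table names, columns, and aggregations below
# are hypothetical. The point is the setting the abstract describes: units
# live in multiple related tables rather than one flat table, and unit-level
# covariates for a causal analysis must be derived via joins and aggregations.
import pandas as pd

# Hypothetical relational schema: authors, papers, and an authorship relation.
authors = pd.DataFrame({"author_id": [1, 2, 3],
                        "prestige":  [0.9, 0.4, 0.7]})
papers = pd.DataFrame({"paper_id": [10, 11, 12],
                       "venue_rank": [1, 3, 2],
                       "citations":  [120, 15, 40]})
writes = pd.DataFrame({"author_id": [1, 1, 2, 3],
                       "paper_id":  [10, 11, 11, 12]})

# A flat-table method needs one row per homogeneous unit; here that table
# has to be assembled from the relations first.
unit_table = (writes.merge(authors, on="author_id")
                    .merge(papers, on="paper_id")
                    .groupby("author_id")
                    .agg(treatment=("venue_rank", "min"),    # e.g. best venue published in
                         outcome=("citations", "sum"),       # e.g. total citations
                         prestige=("prestige", "first"))
                    .reset_index())
print(unit_table)
```

Which joins and aggregations define the unit-level covariates is itself a modeling choice, which is the kind of causal background knowledge a language like CaRL aims to make explicit.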
Related papers
- Causal Representation Learning in Temporal Data via Single-Parent Decoding [66.34294989334728]
Scientific research often seeks to understand the causal structure underlying high-level variables in a system.
Scientists typically collect low-level measurements, such as geographically distributed temperature readings.
We propose a differentiable method, Causal Discovery with Single-parent Decoding, that simultaneously learns the underlying latents and a causal graph over them.
arXiv Detail & Related papers (2024-10-09T15:57:50Z)
- Unsupervised Pairwise Causal Discovery on Heterogeneous Data using Mutual Information Measures [49.1574468325115]
Causal discovery tackles the challenge of identifying cause-effect relationships by analyzing the statistical properties of the constituent variables.
We question the current (possibly misleading) baseline results on the basis that they were obtained through supervised learning.
In consequence, we approach this problem in an unsupervised way, using robust Mutual Information measures.
arXiv Detail & Related papers (2024-08-01T09:11:08Z)
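Since the exact estimator behind the mutual-information approach summarized above is not given here, the following Python sketch shows one standard unsupervised recipe for the pairwise case under assumed choices (a linear fit and scikit-learn's kNN-based MI estimator): fit a regression in each direction and prefer the direction whose residuals share less information with the putative cause.

```python
# Minimal sketch of unsupervised pairwise direction detection. The paper's
# exact estimator and "robust" MI measure are not specified in this summary,
# so the linear fit and kNN-based MI below are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import mutual_info_regression

def direction_score(cause, effect):
    """MI between the putative cause and the residual of effect ~ cause."""
    cause = cause.reshape(-1, 1)
    resid = effect - LinearRegression().fit(cause, effect).predict(cause)
    return mutual_info_regression(cause, resid, random_state=0)[0]

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=2000)
y = x + 0.5 * rng.uniform(-1, 1, size=2000)  # ground truth: x -> y, non-Gaussian noise

score_xy = direction_score(x, y)   # residuals independent of x -> low MI
score_yx = direction_score(y, x)   # misspecified direction -> higher MI
print("inferred:", "x -> y" if score_xy < score_yx else "y -> x")
```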
- Argumentative Causal Discovery [13.853426822028975]
Causal discovery amounts to unearthing causal relationships amongst features in data.
We deploy assumption-based argumentation (ABA) to learn graphs which reflect causal dependencies in the data.
We prove that our method exhibits desirable properties, notably that, under natural conditions, it can retrieve ground-truth causal graphs.
arXiv Detail & Related papers (2024-05-18T10:34:34Z)
- A Versatile Causal Discovery Framework to Allow Causally-Related Hidden Variables [28.51579090194802]
We introduce a novel framework for causal discovery that accommodates the presence of causally-related hidden variables almost everywhere in the causal network.
We develop a Rank-based Latent Causal Discovery algorithm, RLCD, that can efficiently locate hidden variables, determine their cardinalities, and discover the entire causal structure over both measured and hidden ones.
Experimental results on both synthetic and real-world personality data sets demonstrate the efficacy of the proposed approach in finite-sample cases.
arXiv Detail & Related papers (2023-12-18T07:57:39Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal Discovery via Conditional Independence Testing with Proxy Variables [35.3493980628004]
The presence of unobserved variables, such as the latent confounder, can introduce bias in conditional independence testing.
We propose a novel hypothesis-testing procedure that can effectively examine the existence of the causal relationship over continuous variables.
arXiv Detail & Related papers (2023-05-09T09:08:39Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
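The following Python sketch illustrates only the modeling idea behind C-DAGs, using hypothetical cluster names rather than the identification machinery developed in the paper: limited prior knowledge is written as a DAG over clusters of variables, which then constrains which variable-level edges are admissible.

```python
# Sketch of partial specification via clusters (hypothetical clusters and
# membership; not the paper's identification results).
import networkx as nx

# Hypothetical clusters of low-level variables.
clusters = {
    "Genetics":  ["snp_1", "snp_2"],
    "Lifestyle": ["diet", "exercise"],
    "Disease":   ["biomarker_a", "diagnosis"],
}

# Partial prior knowledge, expressed only at the cluster level.
cdag = nx.DiGraph()
cdag.add_nodes_from(clusters)
cdag.add_edges_from([("Genetics", "Disease"), ("Lifestyle", "Disease")])
assert nx.is_directed_acyclic_graph(cdag)

def edge_allowed(u, v):
    """A variable-level edge u -> v is admissible only if the clusters are
    identical or connected by a cluster-level edge."""
    cu = next(c for c, vs in clusters.items() if u in vs)
    cv = next(c for c, vs in clusters.items() if v in vs)
    return cu == cv or cdag.has_edge(cu, cv)

print(edge_allowed("snp_1", "diagnosis"))   # True: Genetics -> Disease is declared
print(edge_allowed("biomarker_a", "diet"))  # False: no Disease -> Lifestyle edge
```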
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
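As an aside on why deterministic relations need special treatment, consider a toy construction (assumed here, not taken from the paper): in a chain X -> Z -> Y with Z = 2X exactly, the data satisfy "Z independent of Y given X" even though Z -> Y is a genuine edge, so faithfulness-based tests would wrongly discard that edge. The necessary and sufficient conditions mentioned above characterize when the structure remains uniquely identifiable despite such degeneracies.

```python
# Toy illustration of a faithfulness violation caused by a deterministic
# relation (assumed example, not the paper's construction).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
z = 2.0 * x                      # deterministic relation
y = z + rng.normal(size=n)       # noisy child of z

def residual(a, b):
    """Residual of a after linear regression on b (with intercept)."""
    design = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(design, a, rcond=None)
    return a - design @ coef

# Conditional covariance of (Z, Y) given X, estimated via residuals:
cond_cov = np.cov(residual(z, x), residual(y, x))[0, 1]
print(f"Cov(Z, Y | X) ~ {cond_cov:.3e}")  # ~0: looks independent despite Z -> Y
```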
- Typing assumptions improve identification in causal discovery [123.06886784834471]
Causal discovery from observational data is a challenging task for which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
arXiv Detail & Related papers (2021-07-22T14:23:08Z)
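A minimal sketch of how typing assumptions prune the search space, using hypothetical variable types and type-level rules rather than the paper's formalism: if edges between types are only permitted in declared directions, many candidate edges can be discarded before any statistical test is run.

```python
# Hypothetical types and type-level rules for illustration only.
from itertools import permutations

var_type = {"altitude": "geography", "rainfall": "weather",
            "temperature": "weather", "crop_yield": "outcome"}

# Assumed rule set: geography may affect weather and outcomes, weather may
# affect weather and outcomes, and nothing may affect geography.
allowed_type_edges = {("geography", "weather"), ("geography", "outcome"),
                      ("weather", "weather"), ("weather", "outcome")}

candidates = [(a, b) for a, b in permutations(var_type, 2)
              if (var_type[a], var_type[b]) in allowed_type_edges]

print(f"{len(candidates)} of {len(var_type) * (len(var_type) - 1)} directed "
      f"edges remain after applying the type constraints:")
for a, b in candidates:
    print(f"  {a} -> {b}")
```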
- Causal Inference in Geoscience and Remote Sensing from Observational Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.