A General Framework for Constraint-based Causal Learning
- URL: http://arxiv.org/abs/2408.07575v1
- Date: Wed, 14 Aug 2024 14:16:02 GMT
- Title: A General Framework for Constraint-based Causal Learning
- Authors: Kai Z. Teh, Kayvan Sadeghi, Terry Soo
- Abstract summary: The paper provides a general framework to obtain correctness conditions for causal learning.
We show that the sparsest Markov representation condition is the weakest correctness condition resulting from existing notions of minimality for maximal ancestral graphs and directed acyclic graphs.
- Score: 3.031375888004876
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: By representing any constraint-based causal learning algorithm via a placeholder property, we decompose the correctness condition into a part relating the distribution and the true causal graph, and a part that depends solely on the distribution. This provides a general framework to obtain correctness conditions for causal learning, and has the following implications. We provide exact correctness conditions for the PC algorithm, which are then related to correctness conditions of some other existing causal discovery algorithms. We show that the sparsest Markov representation condition is the weakest correctness condition resulting from existing notions of minimality for maximal ancestral graphs and directed acyclic graphs. We also reason that knowledge beyond Pearl-minimality is necessary for causal learning beyond faithfulness.
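As a hedged illustration of the constraint-based paradigm the abstract describes, the skeleton phase of the PC algorithm can be sketched as follows. The `ci_test` argument is an assumed conditional-independence oracle standing in for a statistical test on data; the function names are illustrative, not from the paper.

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    """Skeleton phase of the PC algorithm.

    ci_test(x, y, S) should return True when x and y are
    conditionally independent given the set S (an oracle here;
    in practice, a statistical test on observed data).
    """
    # Start from the complete undirected graph.
    adj = {v: set(nodes) - {v} for v in nodes}
    sepset = {}
    k = 0
    # Grow the conditioning-set size until no node has enough neighbours.
    while any(len(adj[x] - {y}) >= k for x in nodes for y in adj[x]):
        for x in nodes:
            for y in list(adj[x]):
                # Try conditioning sets of size k drawn from x's neighbours.
                for S in combinations(adj[x] - {y}, k):
                    if ci_test(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
        k += 1
    return adj, sepset
```

For a chain x -> y -> z, where x and z are independent given {y}, the skeleton retains only the edges x-y and y-z and records {y} as the separating set for (x, z).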
Related papers
- Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
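The conditional-entropy oracle mentioned above can be illustrated with a plug-in estimate from paired discrete samples; the helper name `conditional_entropy` is mine, and this shows only the oracle quantity, not the paper's layering procedure.

```python
import math
from collections import Counter

def conditional_entropy(xs, ys):
    """Plug-in estimate of H(X | Y) in bits from paired samples."""
    joint = Counter(zip(xs, ys))
    marg_y = Counter(ys)
    n = len(xs)
    h = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n            # empirical joint probability
        p_x_given_y = c / marg_y[y]  # empirical conditional probability
        h -= p_xy * math.log2(p_x_given_y)
    return h
```

When X is a deterministic function of Y the estimate is 0 bits; when X is a fair coin independent of Y it is 1 bit.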
arXiv Detail & Related papers (2024-01-19T05:18:28Z)
- Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems [46.71914490521821]
This paper introduces a new extragradient-type algorithm for a class of nonconvex-nonconcave minimax problems.
The proposed algorithm is applicable to constrained and regularized problems.
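For context, the classical extragradient update for a smooth minimax problem min_x max_y f(x, y) can be sketched as below; this shows only the standard scheme, not the paper's proposed variant.

```python
def extragradient_step(x, y, grad_x, grad_y, eta):
    """One extragradient step for min_x max_y f(x, y).

    grad_x(x, y) and grad_y(x, y) return the partial gradients of f.
    The update is evaluated at an extrapolated ("leading") point,
    which is what lets the method escape the cycling of plain
    gradient descent-ascent on bilinear problems.
    """
    # Extrapolation step.
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Update using gradients at the extrapolated point.
    x_new = x - eta * grad_x(x_half, y_half)
    y_new = y + eta * grad_y(x_half, y_half)
    return x_new, y_new

# On the bilinear problem f(x, y) = x * y, plain gradient
# descent-ascent orbits the saddle point; extragradient
# spirals in toward (0, 0).
x, y = 1.0, 1.0
for _ in range(1000):
    x, y = extragradient_step(x, y, lambda x, y: y, lambda x, y: x, 0.1)
```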
arXiv Detail & Related papers (2023-02-20T08:37:08Z)
- Characterization and Learning of Causal Graphs with Small Conditioning Sets [6.52423450125622]
Constraint-based causal discovery algorithms learn part of the causal graph structure by systematically testing conditional independences.
We propose a novel representation that allows us to graphically characterize $k$-Markov equivalence between two causal graphs.
We conduct synthetic and semi-synthetic experiments to demonstrate that the $k$-PC algorithm enables more robust causal discovery in the small sample regime.
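The small-conditioning-set idea can be sketched by capping the size of the conditioning sets searched over at $k$; the function below is an illustrative helper of mine, not the paper's $k$-PC algorithm, and `ci_test` is an assumed independence oracle.

```python
from itertools import combinations

def k_separated(x, y, others, ci_test, k):
    """Return a separating set of size <= k for x and y, if one
    exists among subsets of `others`; otherwise None.

    Capping the subset size keeps the number of CI tests
    polynomial in the number of variables for fixed k, and keeps
    each test on low-order conditioning sets, which are more
    reliable in the small sample regime.
    """
    for size in range(k + 1):
        for S in combinations(others, size):
            if ci_test(x, y, set(S)):
                return set(S)
    return None
```

With an oracle under which x and z are independent exactly when y is conditioned on, a search with k=1 finds the separating set {y}, while k=0 finds none.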
arXiv Detail & Related papers (2023-01-22T00:24:22Z)
- Partial Disentanglement via Mechanism Sparsity [25.791043728989937]
Disentanglement via mechanism sparsity was introduced as a principled approach to extract latent factors without supervision.
We introduce a generalization of this theory which applies to any ground-truth graph.
We show how disentangled the learned representation is expected to be, via a new equivalence relation over models we call consistency.
arXiv Detail & Related papers (2022-07-15T20:06:12Z)
- Conditions and Assumptions for Constraint-based Causal Structure Learning [2.741266294612776]
The paper formalizes constraint-based structure learning of the "true" causal graph from observed data.
We provide the theory for the general class of models under the assumption that the distribution is Markovian to the true causal graph.
arXiv Detail & Related papers (2021-03-24T23:08:00Z)
- On the Fundamental Limits of Exact Inference in Structured Prediction [34.978150480870696]
Inference is a central task in structured prediction and is naturally modeled with a graph.
The goal of exact inference is to recover the unknown true label for each node precisely.
We show that there exists a gap between the fundamental limits and the performance of the computationally tractable method of Bello and Honorio.
arXiv Detail & Related papers (2021-02-17T17:44:21Z)
- Lexicographically Fair Learning: Algorithms and Generalization [13.023987750303856]
Lexifairness asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized.
We derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting.
arXiv Detail & Related papers (2021-02-16T21:15:42Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking completes in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to square loss given draws from some unknown distribution.
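The ReLU regression problem can be illustrated with a plain gradient-descent fit of a one-dimensional ReLU max(0, w*x + b) under square loss; this is a generic baseline sketch, not the paper's approximation scheme, and the function name is mine.

```python
def fit_relu(data, lr=0.05, steps=2000):
    """Fit y ~ max(0, w*x + b) to (x, y) pairs by gradient
    descent on the mean square loss (a generic baseline)."""
    w, b = 1.0, 0.0  # simple fixed initialisation
    n = len(data)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            z = w * x + b
            if z > 0:  # gradient flows only where the ReLU is active
                err = max(0.0, z) - y
                gw += 2 * err * x / n
                gb += 2 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Fit data generated by a true ReLU with w = 2, b = -1;
# gradient descent drives the square loss close to zero.
data = [(i / 10, max(0.0, 2 * (i / 10) - 1)) for i in range(-10, 21)]
w, b = fit_relu(data)
```

Note that the problem is nonconvex (the active set changes with w and b), which is why the paper studies approximation schemes rather than relying on local search alone.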
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.