A General Framework on Conditions for Constraint-based Causal Learning
- URL: http://arxiv.org/abs/2408.07575v2
- Date: Mon, 30 Jun 2025 13:16:49 GMT
- Title: A General Framework on Conditions for Constraint-based Causal Learning
- Authors: Kai Z. Teh, Kayvan Sadeghi, Terry Soo
- Abstract summary: Most constraint-based causal learning algorithms provably return the correct causal graph under certain correctness conditions, such as faithfulness. We provide a general framework to obtain and study correctness conditions for these algorithms.
- Score: 3.031375888004876
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Most constraint-based causal learning algorithms provably return the correct causal graph under certain correctness conditions, such as faithfulness. By representing any constraint-based causal learning algorithm using the notion of a property, we provide a general framework to obtain and study correctness conditions for these algorithms. From the framework, we provide exact correctness conditions for the PC algorithm, which are then related to the correctness conditions of some other existing causal discovery algorithms. The framework also suggests a paradigm for designing causal learning algorithms which allows for the correctness conditions of algorithms to be controlled for before designing the actual algorithm, and has the following implications. We show that the sparsest Markov representation condition is the weakest correctness condition for algorithms that output ancestral graphs or directed acyclic graphs satisfying any existing notions of minimality. We also reason that Pearl-minimality is necessary for meaningful causal learning but not sufficient to relax the faithfulness condition and, as such, has to be strengthened, such as by including background knowledge, for causal learning beyond faithfulness.
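As a rough illustration of the generic constraint-based loop that the framework abstracts via the notion of a property, the sketch below recovers a PC-style skeleton by deleting an edge whenever some conditioning set renders its endpoints conditionally independent. This is a minimal sketch under assumed choices, not the authors' implementation: the `ci_independent` Fisher-z partial-correlation test and the function names are illustrative stand-ins for whatever conditional independence oracle is used. Under faithfulness, or the exact correctness conditions derived in the paper, the resulting conditional-independence pattern determines the skeleton of the true graph.
```python
# Minimal sketch of PC-style skeleton recovery from a conditional independence
# (CI) oracle. Illustrative only; the Fisher-z test below is an assumed stand-in.
from itertools import combinations

import numpy as np
from scipy import stats


def ci_independent(data, i, j, cond, alpha=0.05):
    """Fisher-z partial-correlation test: is X_i independent of X_j given X_cond?"""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)
    # Partial correlation of the first two variables given the rest.
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = float(np.clip(r, -0.999999, 0.999999))
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_value > alpha


def pc_skeleton(data):
    """Start from the complete graph; drop edge i-j once some conditioning set
    of neighbours of i makes i and j conditionally independent."""
    d = data.shape[1]
    adj = {v: set(range(d)) - {v} for v in range(d)}
    level = 0  # size of the conditioning sets tried in this pass
    while any(len(adj[v]) - 1 >= level for v in range(d)):
        for i in range(d):
            for j in list(adj[i]):
                others = adj[i] - {j}
                if len(others) < level:
                    continue
                for cond in combinations(sorted(others), level):
                    if ci_independent(data, i, j, cond):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return adj
```
With a perfect oracle and data generated faithfully from a DAG, `pc_skeleton` returns the skeleton of that DAG; the paper's contribution is to characterise exactly which weaker conditions on the oracle's output still guarantee correctness.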
Related papers
- Greedy Algorithm for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure [51.4953549382486]
We study the greedy (exploitation-only) algorithm in bandit problems with a known reward structure. Our characterization extends to contextual bandits and interactive decision-making with arbitrary feedback.
arXiv Detail & Related papers (2025-03-06T01:51:11Z) - Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z) - Loss Balancing for Fair Supervised Learning [20.13250413610897]
Supervised learning models have been used in various domains such as lending, college admission, face recognition, natural language processing, etc.
Various fairness notions have been proposed to address unfairness of the predictor in the learning process, such as Equalized Loss (EL).
arXiv Detail & Related papers (2023-11-07T04:36:13Z) - Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems [46.71914490521821]
This paper introduces a new extragradient-type algorithm for a class of nonconvex-nonconcave minimax problems.
The proposed algorithm is applicable to constrained and regularized problems.
arXiv Detail & Related papers (2023-02-20T08:37:08Z) - Minimalistic Predictions to Schedule Jobs with Online Precedence Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
An algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed.
arXiv Detail & Related papers (2023-01-30T13:17:15Z) - Characterization and Learning of Causal Graphs with Small Conditioning Sets [6.52423450125622]
Constraint-based causal discovery algorithms learn part of the causal graph structure by systematically testing conditional independences.
We propose a novel representation that allows us to graphically characterize $k$-Markov equivalence between two causal graphs.
We conduct synthetic and semi-synthetic experiments to demonstrate that the $k$-PC algorithm enables more robust causal discovery in the small sample regime.
arXiv Detail & Related papers (2023-01-22T00:24:22Z) - Partial Disentanglement via Mechanism Sparsity [25.791043728989937]
Disentanglement via mechanism sparsity was introduced as a principled approach to extract latent factors without supervision.
We introduce a generalization of this theory which applies to any ground-truth graph.
We show how disentangled the learned representation is expected to be, via a new equivalence relation over models we call consistency.
arXiv Detail & Related papers (2022-07-15T20:06:12Z) - Conditions and Assumptions for Constraint-based Causal Structure Learning [2.741266294612776]
The paper formalizes constraint-based structure learning of the "true" causal graph from observed data.
We provide the theory for the general class of models under the assumption that the distribution is Markovian to the true causal graph.
arXiv Detail & Related papers (2021-03-24T23:08:00Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves an almost optimal consistency-robustness tradeoff for any $k$.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - On the Fundamental Limits of Exact Inference in Structured Prediction [34.978150480870696]
Inference is a main task in structured prediction and it is naturally modeled with a graph.
The goal of exact inference is to recover the unknown true label for each node precisely.
We show that there exists a gap between the fundamental limits and the performance of the computationally tractable method of Bello and Honorio.
arXiv Detail & Related papers (2021-02-17T17:44:21Z) - Lexicographically Fair Learning: Algorithms and Generalization [13.023987750303856]
Lexifairness asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized.
We derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting.
arXiv Detail & Related papers (2021-02-16T21:15:42Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation represents a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment
Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z) - From Checking to Inference: Actual Causality Computations as
Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions are efficiently automated. Using models with more than $8000$ variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss, given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - Verification and Validation of Convex Optimization Algorithms for Model Predictive Control [1.5322124183968633]
This article discusses the formal verification of the Ellipsoid method, a convex optimization algorithm, and its code implementation.
The applicability and limitations of those code properties and proofs are presented as well.
Modifications to the algorithm are presented which can be used to control its numerical stability.
arXiv Detail & Related papers (2020-05-26T09:18:14Z)