Expert-Aided Causal Discovery of Ancestral Graphs
- URL: http://arxiv.org/abs/2309.12032v3
- Date: Fri, 10 Oct 2025 19:53:40 GMT
- Title: Expert-Aided Causal Discovery of Ancestral Graphs
- Authors: Tiago da Silva, Bruna Bazaluk, Eliezer de Souza da Silva, António Góis, Dominik Heider, Samuel Kaski, Diego Mesquita, Adèle Helena Ribeiro
- Abstract summary: Causal discovery algorithms are notably brittle when data is scarce. The lack of uncertainty quantification in most CD methods hinders users from diagnosing and refining results. We introduce Ancestral GFlowNets (AGFNs) to address these issues.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal discovery (CD) algorithms are notably brittle when data is scarce, inferring unreliable causal relations that may contradict expert knowledge, especially when considering latent confounders. Furthermore, the lack of uncertainty quantification in most CD methods hinders users from diagnosing and refining results. To address these issues, we introduce Ancestral GFlowNets (AGFNs). AGFN samples ancestral graphs (AGs) proportionally to a score-based belief distribution representing our epistemic uncertainty over the causal relationships. Building upon this distribution, we propose an elicitation framework for expert-driven assessment. This framework comprises an optimal experimental design to probe the expert and a scheme to incorporate the obtained feedback into AGFN. Our experiments show that: i) AGFN is competitive against other methods that address latent confounding on both synthetic and real-world datasets; and ii) our design for incorporating feedback from a (simulated) human expert or a Large Language Model (LLM) improves inference quality.
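The abstract's core idea, sampling graphs proportionally to a score-based belief distribution, can be illustrated with a toy sketch. This is not the authors' implementation: the candidate graphs and scores below are hypothetical placeholders, and the exact enumeration stands in for the GFlowNet sampler used in the paper.

```python
import math
import random

# Toy hypothesis space over two variables; in AGFN these would be
# ancestral graphs, and the scores would come from a score-based belief.
candidate_graphs = ["X->Y", "Y->X", "X<->Y"]
scores = {"X->Y": 2.0, "Y->X": 0.5, "X<->Y": 1.0}  # hypothetical log-scores

def belief_distribution(graphs, scores):
    """Normalize exp(score) into a probability distribution over graphs."""
    weights = [math.exp(scores[g]) for g in graphs]
    total = sum(weights)
    return [w / total for w in weights]

def sample_graph(graphs, probs, rng=random):
    """Draw one graph with probability proportional to its belief weight."""
    return rng.choices(graphs, weights=probs, k=1)[0]

probs = belief_distribution(candidate_graphs, scores)
# Higher-scoring graphs are sampled more often, so repeated draws expose
# epistemic uncertainty instead of committing to a single point estimate.
```

Sampling from (rather than maximizing) the belief is what lets downstream elicitation target the graphs the model is least sure about.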
Related papers
- Leveraging Large Language Models for Causal Discovery: a Constraint-based, Argumentation-driven Approach [9.175642602891939]
Causal Assumption-based Argumentation (ABA) is a framework that uses symbolic reasoning to ensure correspondence between input constraints and output graphs. We explore the use of large language models (LLMs) as imperfect experts for Causal ABA, eliciting semantic structural priors from variable names and descriptions. Experiments on standard benchmarks and semantically grounded synthetic graphs demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2026-02-18T14:15:21Z) - dcFCI: Robust Causal Discovery Under Latent Confounding, Unfaithfulness, and Mixed Data [1.9797215742507548]
We introduce the first nonparametric score to assess a Partial Ancestral Graph's compatibility with observed data. We then propose data-compatible Fast Causal Inference (dcFCI) to jointly address latent confounding, empirical unfaithfulness, and mixed data types.
arXiv Detail & Related papers (2025-05-10T07:05:19Z) - Federated Causal Inference in Healthcare: Methods, Challenges, and Applications [21.843379449376172]
Federated causal inference enables multi-site treatment effect estimation without sharing individual-level data. We present a comprehensive review and theoretical analysis of federated causal effect estimation across both binary/continuous and time-to-event outcomes. We conclude by outlining opportunities, challenges, and future directions for scalable, fair, and trustworthy federated causal inference in distributed healthcare systems.
arXiv Detail & Related papers (2025-05-04T20:30:11Z) - Learning to Defer for Causal Discovery with Imperfect Experts [59.071731337922664]
We propose L2D-CD, a method for gauging the correctness of expert recommendations and optimally combining them with data-driven causal discovery results.
We evaluate L2D-CD on the canonical Tübingen pairs dataset and demonstrate its superior performance compared to both the causal discovery method and the expert used in isolation.
arXiv Detail & Related papers (2025-02-18T18:55:53Z) - Image Quality Assessment: Investigating Causal Perceptual Effects with Abductive Counterfactual Inference [22.65765161695905]
Existing full-reference image quality assessment (FR-IQA) methods often fail to capture the complex causal mechanisms that underlie human perceptual responses to image distortions. We propose an FR-IQA method based on abductive counterfactual inference to investigate the causal relationships between deep network features and perceptual distortions.
arXiv Detail & Related papers (2024-12-22T09:17:57Z) - Challenges and Considerations in the Evaluation of Bayesian Causal Discovery [49.0053848090947]
Representing uncertainty in causal discovery is a crucial component for experimental design, and more broadly, for safe and reliable causal decision making.
Unlike non-Bayesian causal discovery, which relies on a single estimated causal graph and model parameters for assessment, Bayesian causal discovery presents challenges due to the nature of its inferred quantity: a posterior distribution over causal models.
There is no consensus on the most suitable metric for evaluation.
arXiv Detail & Related papers (2024-06-05T12:45:23Z) - Trust Your $\nabla$: Gradient-based Intervention Targeting for Causal Discovery [49.084423861263524]
In this work, we propose a novel Gradient-based Intervention Targeting method, abbreviated GIT.
GIT 'trusts' the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention acquisition function.
We provide extensive experiments in simulated and real-world datasets and demonstrate that GIT performs on par with competitive baselines.
arXiv Detail & Related papers (2022-11-24T17:04:45Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z) - Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) conditions.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the difference between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and the bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z) - Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z) - Evaluating the Adversarial Robustness for Fourier Neural Operators [78.36413169647408]
The Fourier Neural Operator (FNO) was the first model to simulate turbulent flow with zero-shot super-resolution.
We generate adversarial examples for FNO based on norm-bounded data input perturbations.
Our results show that the model's robustness degrades rapidly with increasing perturbation levels.
arXiv Detail & Related papers (2022-04-08T19:19:42Z) - Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Bayesian Model Averaging for Data Driven Decision Making when Causality is Partially Known [0.0]
We use ensemble methods such as Bayesian Model Averaging (BMA) to infer a set of causal graphs.
We provide decisions by computing the expected value and risk of potential interventions explicitly.
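The decision rule described here, scoring an intervention by its expected value and risk under graph uncertainty, can be sketched in a few lines. The posterior weights and utilities below are hypothetical placeholders, not numbers from the paper.

```python
# Minimal BMA decision sketch: average an intervention's payoff over a
# posterior on candidate causal graphs (all values here are hypothetical).
posterior = {"G1": 0.6, "G2": 0.3, "G3": 0.1}   # P(graph | data)
utility = {"G1": 5.0, "G2": -2.0, "G3": 1.0}    # payoff if that graph is true

def expected_value(posterior, utility):
    """Posterior-weighted mean payoff of the intervention."""
    return sum(p * utility[g] for g, p in posterior.items())

def risk(posterior, utility):
    """Posterior variance of the payoff, a simple risk measure."""
    mu = expected_value(posterior, utility)
    return sum(p * (utility[g] - mu) ** 2 for g, p in posterior.items())

ev = expected_value(posterior, utility)   # 0.6*5 + 0.3*(-2) + 0.1*1 = 2.5
var = risk(posterior, utility)
```

Averaging over graphs rather than conditioning on a single estimated graph is what makes the expected value and risk honest about structural uncertainty.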
arXiv Detail & Related papers (2021-05-12T01:55:45Z) - FRITL: A Hybrid Method for Causal Discovery in the Presence of Latent Confounders [46.31784571870808]
We show that under some mild assumptions, the model is uniquely identified by a hybrid method.
Our method leverages the advantages of constraint-based methods and independent noise-based methods to handle both confounded and unconfounded situations.
arXiv Detail & Related papers (2021-03-26T03:12:14Z) - MissDeepCausal: Causal Inference from Incomplete Data Using Deep Latent Variable Models [14.173184309520453]
State-of-the-art methods for causal inference do not account for missing values.
Missing data require an adapted unconfoundedness hypothesis.
Latent confounders whose distribution is learned through variational autoencoders adapted to missing values are considered.
arXiv Detail & Related papers (2020-02-25T12:58:07Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability remains a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.