Leveraging Large Language Models for Causal Discovery: a Constraint-based, Argumentation-driven Approach
- URL: http://arxiv.org/abs/2602.16481v1
- Date: Wed, 18 Feb 2026 14:15:21 GMT
- Title: Leveraging Large Language Models for Causal Discovery: a Constraint-based, Argumentation-driven Approach
- Authors: Zihao Li, Fabrizio Russo
- Abstract summary: Causal Assumption-based Argumentation (ABA) is a framework that uses symbolic reasoning to ensure correspondence between input constraints and output graphs. We explore the use of large language models (LLMs) as imperfect experts for Causal ABA, eliciting semantic structural priors from variable names and descriptions. Experiments on standard benchmarks and semantically grounded synthetic graphs demonstrate state-of-the-art performance.
- Score: 9.175642602891939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal discovery seeks to uncover causal relations from data, typically represented as causal graphs, and is essential for predicting the effects of interventions. While expert knowledge is required to construct principled causal graphs, many statistical methods have been proposed to leverage observational data with varying formal guarantees. Causal Assumption-based Argumentation (ABA) is a framework that uses symbolic reasoning to ensure correspondence between input constraints and output graphs, while offering a principled way to combine data and expertise. We explore the use of large language models (LLMs) as imperfect experts for Causal ABA, eliciting semantic structural priors from variable names and descriptions and integrating them with conditional-independence evidence. Experiments on standard benchmarks and semantically grounded synthetic graphs demonstrate state-of-the-art performance, and we additionally introduce an evaluation protocol to mitigate memorisation bias when assessing LLMs for causal discovery.
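The constraint-based pipeline the abstract describes can be illustrated with a toy sketch: conditional-independence evidence prunes edges from a complete skeleton, and LLM-elicited semantic priors orient the edges that survive. Everything below (variable names, p-values, the prior set, the greedy orientation step) is invented for illustration; the paper's Causal ABA framework combines these inputs through symbolic argumentation, not this simplified procedure.

```python
from itertools import combinations

# Hypothetical CI evidence: p-values for tests of X independent of Y given Z
# (in practice these would come from statistical tests on data).
ci_pvalues = {
    ("smoking", "cancer", frozenset()): 0.001,           # dependent
    ("smoking", "tar", frozenset()): 0.002,              # dependent
    ("tar", "cancer", frozenset()): 0.003,               # dependent
    ("smoking", "cancer", frozenset({"tar"})): 0.40,     # independent given tar
}

# Hypothetical LLM-elicited prior: directed edges the model deems plausible
# from variable names/descriptions alone.
llm_priors = {("smoking", "tar"), ("tar", "cancer")}

def skeleton(variables, alpha=0.05):
    """PC-style skeleton: start from the complete graph and drop an edge
    whenever some conditioning set renders the pair independent."""
    edges = set(frozenset(pair) for pair in combinations(variables, 2))
    for (x, y, _z), p in ci_pvalues.items():
        if p > alpha:
            edges.discard(frozenset((x, y)))
    return edges

def orient(edges):
    """Orient skeleton edges with the LLM prior where it applies;
    leave the rest undirected."""
    directed, undirected = set(), set()
    for e in edges:
        x, y = tuple(e)
        if (x, y) in llm_priors:
            directed.add((x, y))
        elif (y, x) in llm_priors:
            directed.add((y, x))
        else:
            undirected.add(e)
    return directed, undirected

variables = ["smoking", "tar", "cancer"]
sk = skeleton(variables)
directed, undirected = orient(sk)
print(sorted(directed))  # [('smoking', 'tar'), ('tar', 'cancer')]
```

Note how the CI test given `tar` removes the direct smoking-cancer edge, and the prior then orients the remaining chain; the paper's contribution is making this combination principled rather than greedy.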
Related papers
- Use What You Know: Causal Foundation Models with Partial Graphs [97.91863420927866]
Recently proposed Causal Foundation Models (CFMs) promise a more unified approach by amortising causal discovery and inference in a single step. We bridge this gap by introducing methods to condition CFMs on causal information, such as the causal graph or more readily available ancestral information.
arXiv Detail & Related papers (2026-02-16T17:56:37Z) - Causal SHAP: Feature Attribution with Dependency Awareness through Causal Discovery [3.717095609283206]
Causal SHAP is a novel framework that integrates causal relationships into feature attribution. This study contributes to the field of Explainable AI (XAI) by providing a practical framework for causal-aware model explanations.
arXiv Detail & Related papers (2025-08-31T13:31:34Z) - Preference Learning for AI Alignment: a Causal Perspective [55.2480439325792]
We frame this problem in a causal paradigm, providing the rich toolbox of causality to identify persistent challenges. Inheriting from the literature of causal inference, we identify key assumptions necessary for reliable generalisation. We illustrate failure modes of naive reward models and demonstrate how causally-inspired approaches can improve model robustness.
arXiv Detail & Related papers (2025-06-06T10:45:42Z) - Retrieving Classes of Causal Orders with Inconsistent Knowledge Bases [0.8192907805418583]
Large Language Models (LLMs) have emerged as a promising alternative for extracting causal knowledge from text-based metadata. LLMs tend to be unreliable and prone to hallucinations, necessitating strategies that account for their limitations. We present a new method to derive a class of acyclic tournaments, which represent plausible causal orders.
arXiv Detail & Related papers (2024-12-18T16:37:51Z) - Large Language Models for Constrained-Based Causal Discovery [4.858756226945995]
Causality is essential for understanding complex systems, such as the economy, the brain, and the climate.
This work explores the capabilities of Large Language Models (LLMs) as an alternative to domain experts for causal graph generation.
arXiv Detail & Related papers (2024-06-11T15:45:24Z) - Prompting or Fine-tuning? Exploring Large Language Models for Causal Graph Validation [0.0]
This study explores the capability of Large Language Models to evaluate causality in causal graphs. Our study compares two approaches: (1) a prompting-based method for zero-shot and few-shot causal inference, and (2) fine-tuning language models for the causal relation prediction task.
arXiv Detail & Related papers (2024-05-29T09:06:18Z) - Argumentative Causal Discovery [13.853426822028975]
Causal discovery amounts to unearthing causal relationships amongst features in data.
We deploy assumption-based argumentation (ABA) to learn graphs which reflect causal dependencies in the data.
We prove that our method exhibits desirable properties, notably that, under natural conditions, it can retrieve ground-truth causal graphs.
arXiv Detail & Related papers (2024-05-18T10:34:34Z) - Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z) - CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
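The "Causal Layer" in the CausalVAE entry can be sketched numerically: exogenous factors are mixed through a DAG adjacency so that each endogenous factor is a function of its parents, i.e. z = Aᵀz + ε, hence z = (I − Aᵀ)⁻¹ε. The adjacency matrix below is an illustrative hand-picked example, not one learned by the model.

```python
import numpy as np

# Illustrative DAG adjacency: factor 0 -> factor 1 (weight 0.8),
# factor 1 -> factor 2 (weight 0.5). Upper-triangular => acyclic.
A = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])

def causal_layer(eps, A):
    """Solve z = A^T z + eps for z, i.e. z = (I - A^T)^{-1} eps."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, eps)

# A unit perturbation of the root factor propagates down the DAG.
eps = np.array([1.0, 0.0, 0.0])
z = causal_layer(eps, A)
print(z)  # approximately [1.0, 0.8, 0.4]
```

Because the structure is acyclic, the inverse always exists and each latent factor decomposes into contributions from its ancestors, which is what makes the learned representation interpretable as a DAG.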
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.