Causal Inference Using LLM-Guided Discovery
- URL: http://arxiv.org/abs/2310.15117v1
- Date: Mon, 23 Oct 2023 17:23:56 GMT
- Title: Causal Inference Using LLM-Guided Discovery
- Authors: Aniket Vashishtha, Abbavaram Gowtham Reddy, Abhinav Kumar, Saketh
Bachu, Vineeth N Balasubramanian, Amit Sharma
- Abstract summary: We show that the topological order over graph variables (causal order) alone suffices for causal effect inference.
We propose a robust technique for obtaining causal order from Large Language Models (LLMs).
Our approach significantly improves causal ordering accuracy compared to established discovery algorithms.
- Score: 34.040996887499425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At the core of causal inference lies the challenge of determining reliable
causal graphs solely based on observational data. Since the well-known backdoor
criterion depends on the graph, any errors in the graph can propagate
downstream to effect inference. In this work, we initially show that complete
graph information is not necessary for causal effect inference; the topological
order over graph variables (causal order) alone suffices. Further, given a node
pair, causal order is easier to elicit from domain experts compared to graph
edges since determining the existence of an edge can depend extensively on
other variables. Interestingly, we find that the same principle holds for Large
Language Models (LLMs) such as GPT-3.5-turbo and GPT-4, motivating an automated
method to obtain causal order (and hence causal effect) with LLMs acting as
virtual domain experts. To this end, we employ different prompting strategies
and contextual cues to propose a robust technique of obtaining causal order
from LLMs. Acknowledging LLMs' limitations, we also study possible techniques
to integrate LLMs with established causal discovery algorithms, including
constraint-based and score-based methods, to enhance their performance.
Extensive experiments demonstrate that our approach significantly improves
causal ordering accuracy as compared to discovery algorithms, highlighting the
potential of LLMs to enhance causal inference across diverse fields.
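As a concrete illustration of the two steps above, here is a minimal Python sketch, not the authors' implementation: a causal order is elicited by pairwise LLM queries, and the order is then used for backdoor adjustment. The callable `ask_llm` and the win-counting aggregation are assumptions for illustration; the paper studies several prompting strategies. The adjustment step relies on the fact the abstract highlights: the variables preceding the treatment in a valid causal order are non-descendants that include its parents, and therefore form a valid backdoor adjustment set.

```python
import numpy as np

def pairwise_causal_order(variables, ask_llm):
    """Elicit a causal order via pairwise LLM queries.
    `ask_llm(a, b)` is a hypothetical callable that returns whichever
    of the two variable names the LLM judges to be the cause."""
    wins = {v: 0 for v in variables}
    for i, a in enumerate(variables):
        for b in variables[i + 1:]:
            wins[ask_llm(a, b)] += 1
    # Variables judged to cause more of their peers come earlier.
    return sorted(variables, key=lambda v: -wins[v])

def backdoor_effect(data, order, treatment, outcome):
    """Linear-regression adjustment. Given a valid causal order, all
    variables preceding the treatment are non-descendants containing
    its parents, hence a valid backdoor adjustment set."""
    adjustment = order[:order.index(treatment)]
    cols = [np.ones(len(data[treatment])), data[treatment]]
    cols += [data[z] for z in adjustment]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, data[outcome], rcond=None)
    return coef[1]  # coefficient on the treatment term
```

With `order = pairwise_causal_order(names, ask_llm)`, the call `backdoor_effect(data, order, "treatment", "outcome")` returns the regression-adjusted effect estimate, where `data` maps variable names to 1-D NumPy arrays.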
Related papers
- Learning to Defer for Causal Discovery with Imperfect Experts [59.071731337922664]
We propose L2D-CD, a method for gauging the correctness of expert recommendations and optimally combining them with data-driven causal discovery results.
We evaluate L2D-CD on the canonical Tübingen pairs dataset and demonstrate its superior performance compared to both the causal discovery method and the expert used in isolation.
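A rough sketch of the defer idea, assuming per-edge predictions and an estimated expert-correctness score; L2D-CD learns the deferral rule rather than thresholding a fixed score, so every name below is illustrative:

```python
def combine_with_deferral(algo_pred, expert_pred, expert_score, tau=0.7):
    """Keep the expert's per-edge judgment only where an estimated
    correctness score clears `tau`; otherwise fall back on the
    data-driven prediction. Dicts map (cause, effect) -> bool."""
    decisions = {}
    for edge in set(algo_pred) | set(expert_pred):
        if expert_score.get(edge, 0.0) >= tau:
            decisions[edge] = expert_pred.get(edge, False)
        else:
            decisions[edge] = algo_pred.get(edge, False)
    return {edge for edge, present in decisions.items() if present}
```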
arXiv Detail & Related papers (2025-02-18T18:55:53Z)
- Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning [73.2950349728376]
Large language models (LLMs) have demonstrated remarkable success across a wide range of tasks.
However, they still encounter challenges in reasoning tasks that require understanding and inferring relationships between pieces of information.
This challenge is particularly pronounced in tasks involving multi-step processes, such as logical reasoning and multi-hop question answering.
We propose Reasoning with Graphs (RwG), which first constructs explicit graphs from the context and then reasons over them.
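A minimal sketch of the explicit-graph idea, assuming relation triples have already been extracted from the context (for example by an LLM; that step is not shown) and that a multi-hop question reduces to finding a relation path:

```python
from collections import defaultdict, deque

def build_graph(triples):
    """Build an adjacency map from (head, relation, tail) triples."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
    return graph

def multi_hop(graph, start, goal):
    """Breadth-first search for a relation path linking two entities."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, relation, nxt)]))
    return None  # no connecting path found
```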
arXiv Detail & Related papers (2025-01-14T05:18:20Z)
- Discovery of Maximally Consistent Causal Orders with Large Language Models [0.8192907805418583]
Causal discovery is essential for understanding complex systems.
Traditional methods often rely on strong, untestable assumptions.
We propose a novel method to derive a class of acyclic tournaments representing plausible causal orders.
arXiv Detail & Related papers (2024-12-18T16:37:51Z)
- Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation [68.58373854950294]
We focus on causal reasoning and address the task of establishing causal relationships based on correlation information.
We introduce a prompting strategy for this problem that breaks the original task into fixed subquestions.
We evaluate our approach on an existing causal benchmark, Corr2Cause.
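A hedged sketch of such a decomposition; the subquestion wording below is invented for illustration and is not the paper's fixed set:

```python
SUBQUESTIONS = [
    "Which pairs of variables are correlated according to the premise?",
    "Which pairs are independent, marginally or conditionally?",
    "Which causal structures are consistent with these (in)dependencies?",
    "Does the hypothesized relation hold in every consistent structure?",
]

def decomposed_prompts(premise, hypothesis):
    """Turn one hard correlation-to-causation query into a fixed
    sequence of subquestion prompts over the same context."""
    context = f"Premise: {premise}\nHypothesis: {hypothesis}"
    return [f"{context}\n\nQuestion: {q}" for q in SUBQUESTIONS]
```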
arXiv Detail & Related papers (2024-12-18T15:32:27Z)
- CausalGraph2LLM: Evaluating LLMs for Causal Queries [49.337170619608145]
CausalGraph2LLM is a benchmark comprising over 700k queries across diverse causal graph settings.
Our findings reveal that while LLMs show promise in this domain, they are highly sensitive to the encoding used.
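To illustrate what "encoding" means here, the sketch below renders the same two-edge graph in two semantically identical textual forms; the benchmark's finding is that LLM answers to the same query can change between such forms:

```python
def encode_as_edge_list(edges):
    """One encoding: enumerate directed edges."""
    return "; ".join(f"{a} causes {b}" for a, b in edges)

def encode_as_parent_lists(edges):
    """A second encoding of the same graph: group parents by child."""
    parents = {}
    for a, b in edges:
        parents.setdefault(b, []).append(a)
    return "; ".join(f"{child} is caused by {', '.join(ps)}"
                     for child, ps in parents.items())

edges = [("smoking", "cancer"), ("pollution", "cancer")]
print(encode_as_edge_list(edges))     # smoking causes cancer; pollution causes cancer
print(encode_as_parent_lists(edges))  # cancer is caused by smoking, pollution
```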
arXiv Detail & Related papers (2024-10-21T12:12:21Z)
- Large Language Models are Effective Priors for Causal Graph Discovery [6.199818486385127]
Causal structure discovery from observations can be improved by integrating background knowledge provided by an expert to reduce the hypothesis space.
Recently, Large Language Models (LLMs) have begun to be considered as sources of prior information given the low cost of querying them relative to a human expert.
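One common way to use such a prior, sketched under assumptions (the weight `lam` and the independent-edge prior are illustrative, not the paper's exact formulation), is to add LLM edge probabilities as a log-prior to a score-based search objective:

```python
import numpy as np

def posterior_score(data_score, edges, llm_edge_prob, lam=1.0):
    """Combine a data-driven graph score with an LLM-derived edge prior:
    score(G) = data_score(G) + lam * sum over pairs of
    log p_llm(edge present or absent). `llm_edge_prob[(a, b)]` is the
    LLM's probability that the edge a -> b exists."""
    log_prior = 0.0
    for pair, p in llm_edge_prob.items():
        p = min(max(p, 1e-6), 1 - 1e-6)  # clip for numerical safety
        log_prior += np.log(p) if pair in edges else np.log(1 - p)
    return data_score + lam * log_prior
```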
arXiv Detail & Related papers (2024-05-22T11:39:11Z)
- ALCM: Autonomous LLM-Augmented Causal Discovery Framework [2.1470800327528843]
We introduce a new framework, named Autonomous LLM-Augmented Causal Discovery Framework (ALCM), to synergize data-driven causal discovery algorithms and Large Language Models.
The ALCM consists of three integral components: causal structure learning, causal wrapper, and LLM-driven causal refiner.
We evaluate the ALCM framework by implementing two demonstrations on seven well-known datasets.
arXiv Detail & Related papers (2024-05-02T21:27:45Z)
- Redefining the Shortest Path Problem Formulation of the Linear Non-Gaussian Acyclic Model: Pairwise Likelihood Ratios, Prior Knowledge, and Path Enumeration [0.0]
The paper proposes a threefold enhancement to the LiNGAM-SPP framework.
The need for parameter tuning is eliminated by using the pairwise likelihood ratio in lieu of kNN-based mutual information.
The incorporation of prior knowledge is then enabled by a node-skipping strategy implemented on the graph representation of all causal orderings.
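A simplified sketch of the pairwise likelihood-ratio idea (after Hyvarinen & Smith, 2013): under a linear non-Gaussian model, the preferred direction is the one with the lower total entropy of cause plus regression residual. Standardization constants are omitted and `scipy.stats.differential_entropy` stands in for the paper's estimator:

```python
import numpy as np
from scipy.stats import differential_entropy

def pairwise_direction(x, y):
    """Decide between x -> y and y -> x by comparing the entropy of
    the putative cause plus the entropy of the regression residual
    in each direction; the lower sum corresponds to the higher
    likelihood under a linear non-Gaussian model."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    rho = float(np.dot(x, y) / len(x))  # correlation of standardized vars
    score_xy = differential_entropy(x) + differential_entropy(y - rho * x)
    score_yx = differential_entropy(y) + differential_entropy(x - rho * y)
    return "x->y" if score_xy < score_yx else "y->x"
```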
arXiv Detail & Related papers (2024-04-18T05:59:28Z)
- Zero-shot Causal Graph Extrapolation from Text via LLMs [50.596179963913045]
We evaluate the ability of large language models (LLMs) to infer causal relations from natural language.
LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples.
We extend our approach to extrapolating causal graphs through iterated pairwise queries.
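A minimal sketch of the iterated pairwise scheme, where `query_llm` is a hypothetical zero-shot callable returning a direction or "none" for each variable pair in the text:

```python
from itertools import combinations

def extrapolate_graph(variables, text, query_llm):
    """Assemble a causal graph by querying an LLM once per variable
    pair. `query_llm(text, a, b)` is assumed to return 'a->b',
    'b->a', or 'none' for the pair, given the source text."""
    edges = []
    for a, b in combinations(variables, 2):
        answer = query_llm(text, a, b)
        if answer == "a->b":
            edges.append((a, b))
        elif answer == "b->a":
            edges.append((b, a))
    return edges
```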
arXiv Detail & Related papers (2023-12-22T13:14:38Z)
- Causal Reasoning and Large Language Models: Opening a New Frontier for Causality [29.433401785920065]
Large language models (LLMs) can generate causal arguments with high probability.
LLMs may be used by human domain experts to save effort in setting up a causal analysis.
arXiv Detail & Related papers (2023-04-28T19:00:43Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.