CausalGraph2LLM: Evaluating LLMs for Causal Queries
- URL: http://arxiv.org/abs/2410.15939v1
- Date: Mon, 21 Oct 2024 12:12:21 GMT
- Authors: Ivaxi Sheth, Bahare Fatemi, Mario Fritz
- Abstract: Causality is essential in scientific research, enabling researchers to interpret true relationships between variables. These causal relationships are often represented by causal graphs, which are directed acyclic graphs. With the recent advancements in Large Language Models (LLMs), there is an increasing interest in exploring their capabilities in causal reasoning and their potential use to hypothesize causal graphs. These tasks necessitate the LLMs to encode the causal graph effectively for subsequent downstream tasks. In this paper, we propose a comprehensive benchmark, \emph{CausalGraph2LLM}, encompassing a variety of causal graph settings to assess the causal graph understanding capability of LLMs. We categorize the causal queries into two types: graph-level and node-level queries. We benchmark both open-sourced and closed models for our study. Our findings reveal that while LLMs show promise in this domain, they are highly sensitive to the encoding used. Even capable models like GPT-4 and Gemini-1.5 exhibit sensitivity to encoding, with deviations of about $60\%$. We further demonstrate this sensitivity for downstream causal intervention tasks. Moreover, we observe that LLMs can often display biases when presented with contextual information about a causal graph, potentially stemming from their parametric memory.
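The abstract distinguishes graph-level from node-level queries over a causal DAG and reports that answers are highly sensitive to how the graph is encoded in text. As a minimal illustrative sketch (the graph, encoding names, and query are hypothetical, not the benchmark's exact ones), two common textual encodings of the same DAG and a ground-truth node-level query can be written as:

```python
# Illustrative sketch: two textual encodings of one causal DAG and a
# node-level query ("parents of X") answered from the structure.
# The example graph and function names are hypothetical.

causal_dag = {  # child -> list of direct causes (parents)
    "Rain": [],
    "Sprinkler": ["Rain"],
    "WetGrass": ["Rain", "Sprinkler"],
}

def edge_list_encoding(dag):
    """Encode the DAG as one 'cause -> effect' line per edge."""
    return "\n".join(f"{p} -> {c}" for c, ps in dag.items() for p in ps)

def parent_list_encoding(dag):
    """Encode the DAG as one 'node: parents' line per node."""
    return "\n".join(
        f"{c}: {', '.join(ps) if ps else '(none)'}" for c, ps in dag.items()
    )

def parents_of(dag, node):
    """Ground-truth answer to the node-level query 'parents of node'."""
    return dag[node]

print(edge_list_encoding(causal_dag))
print(parent_list_encoding(causal_dag))
print(parents_of(causal_dag, "WetGrass"))  # ['Rain', 'Sprinkler']
```

Both encodings carry identical structural information; the paper's finding is that an LLM's accuracy on the same query can nevertheless shift substantially between such encodings.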
Related papers
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path
We focus on the graph reasoning ability of Large Language Models (LLMs).
We revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem.
Our findings suggest that LLMs can fail to understand graph structures from text descriptions and show varying performance across these fundamental tasks.
arXiv Detail & Related papers (2024-08-18T16:26:39Z)
- Microstructures and Accuracy of Graph Recall by Large Language Models
Graph data is crucial for many applications.
Being able to accurately recall and encode a graph described in earlier text is a basic yet pivotal ability.
arXiv Detail & Related papers (2024-02-19T04:29:45Z)
- Causal Inference Using LLM-Guided Discovery
We show that the topological order over graph variables (causal order) alone suffices for causal effect inference.
We propose a robust technique for obtaining the causal order from Large Language Models (LLMs).
Our approach significantly improves causal-ordering accuracy compared to discovery algorithms.
arXiv Detail & Related papers (2023-10-23T17:23:56Z)
- Can Large Language Models Infer Causation from Correlation?
We test the pure causal inference skills of large language models (LLMs).
We formulate a novel task Corr2Cause, which takes a set of correlational statements and determines the causal relationship between the variables.
We show that these models perform close to random on the task.
arXiv Detail & Related papers (2023-06-09T12:09:15Z)
- Can large language models build causal graphs?
Large language models (LLMs) represent an opportunity to ease the process of building causal graphs.
LLMs have been shown to be brittle to the choice of probing words, context, and prompts that the user employs.
arXiv Detail & Related papers (2023-03-07T22:05:31Z)
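Several of the papers above rely on the topological order of a DAG: the LLM-guided discovery work argues that a causal order over the variables alone suffices for causal effect inference. As a minimal sketch (the example graph is hypothetical), such an order can be computed with Kahn's algorithm:

```python
from collections import deque

def causal_order(edges):
    """Return one topological order of a DAG given as (cause, effect) pairs.

    Uses Kahn's algorithm: repeatedly emit a node with no remaining
    incoming edges, removing its outgoing edges as it is emitted.
    """
    nodes = {n for edge in edges for n in edge}
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for cause, effect in edges:
        children[cause].append(effect)
        indegree[effect] += 1
    # Start from nodes with no causes (sorted for a deterministic order).
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(nodes):
        raise ValueError("graph contains a cycle; not a DAG")
    return order

edges = [("Rain", "Sprinkler"), ("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]
print(causal_order(edges))  # ['Rain', 'Sprinkler', 'WetGrass']
```

A topological order can be cheaper to elicit and to verify than a full edge set, which is one reason causal order is an attractive intermediate target for LLM-based discovery.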
This list is automatically generated from the titles and abstracts of the papers in this site.