Can large language models build causal graphs?
- URL: http://arxiv.org/abs/2303.05279v2
- Date: Fri, 23 Feb 2024 14:40:15 GMT
- Title: Can large language models build causal graphs?
- Authors: Stephanie Long, Tibor Schuster, Alexandre Piché
- Abstract summary: Large language models (LLMs) represent an opportunity to ease the process of building causal graphs.
LLMs have been shown to be brittle to the choice of probing words, context, and prompts that the user employs.
- Score: 54.74910640970968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building causal graphs can be a laborious process. To ensure all relevant
causal pathways have been captured, researchers often have to discuss with
clinicians and experts while also reviewing extensive relevant medical
literature. By encoding common and medical knowledge, large language models
(LLMs) represent an opportunity to ease this process by automatically scoring
edges (i.e., connections between two variables) in potential graphs. LLMs,
however, have been shown to be brittle to the choice of probing words, context,
and prompts that the user employs. In this work, we evaluate if LLMs can be a
useful tool in complementing causal graph development.
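The edge-scoring idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: `query_llm` is a hypothetical stand-in for any chat-completion call, and the prompt wording is an assumption. Here the model is stubbed with fixed answers so the example runs offline.

```python
def build_edge_prompt(cause: str, effect: str) -> str:
    """Phrase a candidate edge as a yes/no causal question (wording is illustrative)."""
    return (
        f"Does a change in '{cause}' directly cause a change in "
        f"'{effect}'? Answer strictly 'yes' or 'no'."
    )

def score_edge(cause: str, effect: str, query_llm) -> int:
    """Score a candidate edge 1 (keep) or 0 (drop) from the model's answer."""
    answer = query_llm(build_edge_prompt(cause, effect))
    return 1 if answer.strip().lower().startswith("yes") else 0

def score_graph(edges, query_llm):
    """Score every candidate edge in a potential graph."""
    return {(a, b): score_edge(a, b, query_llm) for a, b in edges}

if __name__ == "__main__":
    # Stubbed model: accepts smoking -> lung cancer, rejects the reverse.
    stub = lambda p: "yes" if p.startswith("Does a change in 'smoking'") else "no"
    edges = [("smoking", "lung cancer"), ("lung cancer", "smoking")]
    print(score_graph(edges, stub))
```

In practice, `query_llm` would wrap a real model call, and (as the abstract warns) the scores can be sensitive to the exact probing words and prompt format chosen.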
Related papers
- Causal Graphs Meet Thoughts: Enhancing Complex Reasoning in Graph-Augmented LLMs [4.701165676405066]
It is critical not only to retrieve relevant information but also to provide causal reasoning and explainability.
This paper proposes a novel pipeline that filters large knowledge graphs to emphasize cause-effect edges.
Experiments on medical question-answering tasks show consistent gains, with up to a 10% absolute improvement.
arXiv Detail & Related papers (2025-01-24T19:31:06Z)
- CausalGraph2LLM: Evaluating LLMs for Causal Queries [49.337170619608145]
CausalGraph2LLM is a benchmark comprising over 700k queries across diverse causal graph settings.
Our findings reveal that while LLMs show promise in this domain, they are highly sensitive to the encoding used.
arXiv Detail & Related papers (2024-10-21T12:12:21Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Graph Reasoning with Large Language Models via Pseudo-code Prompting [25.469214467011362]
This paper investigates whether prompting via pseudo-code instructions can improve the performance of large language models (LLMs) in solving graph problems.
Our experiments demonstrate that using pseudo-code instructions generally improves the performance of all considered LLMs.
arXiv Detail & Related papers (2024-09-26T14:52:40Z)
- Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path [53.71787069694794]
We focus on the graph reasoning ability of Large Language Models (LLMs).
We revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem.
Our findings suggest that LLMs can fail to understand graph structures through text descriptions and exhibit varying performance for all these fundamental tasks.
arXiv Detail & Related papers (2024-08-18T16:26:39Z)
- Zero-shot Causal Graph Extrapolation from Text via LLMs [50.596179963913045]
We evaluate the ability of large language models (LLMs) to infer causal relations from natural language.
LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples.
We extend our approach to extrapolating causal graphs through iterated pairwise queries.
arXiv Detail & Related papers (2023-12-22T13:14:38Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- Applying Large Language Models for Causal Structure Learning in Non-Small Cell Lung Cancer [8.248361703850774]
Causal discovery is becoming a key component of medical AI research.
In this paper, we investigate applying Large Language Models to the problem of determining the directionality of edges in causal discovery.
Our result shows that LLMs can accurately predict the directionality of edges in causal graphs, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T09:31:14Z)
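The iterated pairwise-query strategy mentioned in the Zero-shot Causal Graph Extrapolation entry above can be sketched as follows. This is an illustrative reading of the abstract, not the paper's implementation: `query_llm` and the prompt wording are assumptions, and the model is stubbed so the example runs offline.

```python
import re
from itertools import permutations

def extrapolate_graph(variables, query_llm):
    """Build a directed edge list by querying the model about every ordered pair."""
    edges = []
    for cause, effect in permutations(variables, 2):
        prompt = (
            f"Based on the text, does '{cause}' cause '{effect}'? "
            "Answer 'yes' or 'no'."
        )
        if query_llm(prompt).strip().lower().startswith("yes"):
            edges.append((cause, effect))
    return edges

if __name__ == "__main__":
    # Stubbed model: answers from a fixed set of known causal pairs.
    def stub(prompt):
        cause, effect = re.findall(r"'([^']+)'", prompt)[:2]
        known = {("rain", "wet grass"), ("sprinkler", "wet grass")}
        return "yes" if (cause, effect) in known else "no"

    print(extrapolate_graph(["rain", "wet grass", "sprinkler"], stub))
```

Note that this brute-force scheme issues n(n-1) queries for n variables, so cost grows quadratically with graph size.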
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.