Can large language models build causal graphs?
- URL: http://arxiv.org/abs/2303.05279v2
- Date: Fri, 23 Feb 2024 14:40:15 GMT
- Title: Can large language models build causal graphs?
- Authors: Stephanie Long, Tibor Schuster, Alexandre Piché
- Abstract summary: Large language models (LLMs) represent an opportunity to ease the process of building causal graphs.
LLMs have been shown to be brittle to the choice of probing words, context, and prompts that the user employs.
- Score: 54.74910640970968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building causal graphs can be a laborious process. To ensure all relevant
causal pathways have been captured, researchers often have to consult with
clinicians and experts while also reviewing extensive relevant medical
literature. By encoding both common and medical knowledge, large language models
(LLMs) represent an opportunity to ease this process by automatically scoring
edges (i.e., connections between two variables) in potential graphs. However,
LLMs have been shown to be brittle to the choice of probing words, context,
and prompts that the user employs. In this work, we evaluate whether LLMs can be a
useful tool in complementing causal graph development.
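As a rough illustration of the edge-scoring idea described in the abstract, the following is a minimal sketch, not the paper's actual method: `query_llm` is a hypothetical stand-in for whatever text-completion client is available, and the yes/no prompt with a 0/1 score is an illustrative assumption.

```python
# Hedged sketch of LLM-based edge scoring (illustrative, not the paper's code).
# `query_llm` is a hypothetical callable that maps a prompt string to a reply string.

def score_edge(cause: str, effect: str, query_llm) -> float:
    """Ask the LLM whether `cause` directly causes `effect`; map 'yes'/'no' to 1.0/0.0."""
    prompt = (
        f"Does a change in '{cause}' directly cause a change in '{effect}'? "
        "Answer with 'yes' or 'no' only."
    )
    answer = query_llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0
```

Because the abstract flags brittleness to probing words and prompts, a practical variant would average such scores over several paraphrased prompts rather than rely on a single phrasing.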
Related papers
- CausalGraph2LLM: Evaluating LLMs for Causal Queries [49.337170619608145]
Causality is essential in scientific research, enabling researchers to interpret true relationships between variables.
With the recent advancements in Large Language Models (LLMs), there is an increasing interest in exploring their capabilities in causal reasoning.
arXiv Detail & Related papers (2024-10-21T12:12:21Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Graph Reasoning with Large Language Models via Pseudo-code Prompting [25.469214467011362]
This paper investigates whether prompting via pseudo-code instructions can improve the performance of large language models (LLMs) in solving graph problems.
Our experiments demonstrate that using pseudo-code instructions generally improves the performance of all considered LLMs.
arXiv Detail & Related papers (2024-09-26T14:52:40Z)
- Debate on Graph: a Flexible and Reliable Reasoning Framework for Large Language Models [33.662269036173456]
Large Language Models (LLMs) may suffer from hallucinations in real-world applications due to the lack of relevant knowledge.
Knowledge Graph Question Answering (KGQA) serves as a critical touchstone for integrating LLMs with external knowledge.
We propose an interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning and Debating over Graphs (DoG).
arXiv Detail & Related papers (2024-09-05T01:11:58Z)
- Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path [53.71787069694794]
We focus on the graph reasoning ability of Large Language Models (LLMs).
We revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem.
Our findings suggest that LLMs can fail to understand graph structures from text descriptions and exhibit varying performance across these fundamental tasks.
arXiv Detail & Related papers (2024-08-18T16:26:39Z)
- Zero-shot Causal Graph Extrapolation from Text via LLMs [50.596179963913045]
We evaluate the ability of large language models (LLMs) to infer causal relations from natural language.
LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples.
We extend our approach to extrapolating causal graphs through iterated pairwise queries (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-12-22T13:14:38Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- Applying Large Language Models for Causal Structure Learning in Non-Small Cell Lung Cancer [8.248361703850774]
Causal discovery is becoming a key part of medical AI research.
In this paper, we investigate applying Large Language Models to the problem of determining the directionality of edges in causal discovery.
Our result shows that LLMs can accurately predict the directionality of edges in causal graphs, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T09:31:14Z)
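The iterated pairwise querying mentioned in the Zero-shot Causal Graph Extrapolation entry above can be pictured with the minimal sketch below. It is a hypothetical illustration, not that paper's implementation: `score_edge` stands for any LLM-backed pairwise scorer (such as the one sketched after the abstract), the example variable names are invented, and the 0.5 threshold is an arbitrary assumption.

```python
from itertools import permutations
from typing import Callable, Iterable, List, Tuple

def extrapolate_graph(
    variables: Iterable[str],
    score_edge: Callable[[str, str], float],
    threshold: float = 0.5,
) -> List[Tuple[str, str]]:
    """Query every ordered pair of variables and keep edges whose score clears the threshold."""
    return [
        (cause, effect)
        for cause, effect in permutations(variables, 2)
        if score_edge(cause, effect) >= threshold
    ]

# Example with invented variables and any pairwise scorer:
# edges = extrapolate_graph(["smoking", "tar deposits", "lung cancer"], my_score_edge)
```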
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.