Large Language Models are Effective Priors for Causal Graph Discovery
- URL: http://arxiv.org/abs/2405.13551v1
- Date: Wed, 22 May 2024 11:39:11 GMT
- Title: Large Language Models are Effective Priors for Causal Graph Discovery
- Authors: Victor-Alexandru Darvariu, Stephen Hailes, Mirco Musolesi
- Abstract summary: Causal structure discovery from observations can be improved by integrating background knowledge provided by an expert to reduce the hypothesis space.
Recently, Large Language Models (LLMs) have begun to be considered as sources of prior information given the low cost of querying them relative to a human expert.
- Score: 6.199818486385127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal structure discovery from observations can be improved by integrating background knowledge provided by an expert to reduce the hypothesis space. Recently, Large Language Models (LLMs) have begun to be considered as sources of prior information given the low cost of querying them relative to a human expert. In this work, firstly, we propose a set of metrics for assessing LLM judgments for causal graph discovery independently of the downstream algorithm. Secondly, we systematically study a set of prompting designs that allows the model to specify priors about the structure of the causal graph. Finally, we present a general methodology for the integration of LLM priors in graph discovery algorithms, finding that they help improve performance on common-sense benchmarks and especially when used for assessing edge directionality. Our work highlights the potential as well as the shortcomings of the use of LLMs in this problem space.
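The abstract describes eliciting structural priors from an LLM and folding them into a graph discovery algorithm, but it does not pin down an implementation here. The sketch below is only an illustration of that general pipeline, assuming a hypothetical query_llm helper; the prompt wording, the hard yes/no scoring, and the lambda-weighted penalty are assumptions for the example, not the authors' exact method.

```python
# Illustrative sketch only (not the paper's exact procedure): elicit pairwise
# edge priors from an LLM, then use them as a soft penalty inside a
# score-based structure search.
from itertools import permutations


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError


def elicit_edge_priors(variables):
    """Ask the LLM, for each ordered pair (a, b), whether a directly causes b.

    Returns a dict mapping (a, b) to a prior in [0, 1]; a hard 0/1 answer is
    used here, but repeated sampling could give softer values.
    """
    priors = {}
    for a, b in permutations(variables, 2):
        prompt = (
            f"In a common-sense causal model, does '{a}' directly cause "
            f"'{b}'? Answer with a single word: yes or no."
        )
        answer = query_llm(prompt).strip().lower()
        priors[(a, b)] = 1.0 if answer.startswith("yes") else 0.0
    return priors


def prior_penalty(edges, priors, lam=1.0):
    """Penalty subtracted from a candidate graph's data-driven score.

    `edges` is a set of (cause, effect) pairs; each edge the LLM rejected
    costs `lam`, so the search prefers graphs that agree with the prior.
    """
    return lam * sum(1.0 - priors[edge] for edge in edges)
```

A discovery algorithm would then optimise something like fit(G) - prior_penalty(edges(G), priors); the paper's proposed metrics instead assess the elicited judgments directly, independently of any downstream search.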
Related papers
- Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
This study proposes using large language models (LLMs) to elicit expert prior distributions for predictive models.
We compare LLM-elicited and uninformative priors, evaluate whether LLMs truthfully generate parameter distributions, and propose a model selection strategy for in-context learning and prior elicitation.
Our findings show that LLM-elicited prior parameter distributions significantly reduce predictive error compared to uninformative priors in low-data settings.
arXiv Detail & Related papers (2024-11-26T10:13:39Z) - LLM-initialized Differentiable Causal Discovery [0.0]
Differentiable causal discovery (DCD) methods are effective in uncovering causal relationships from observational data.
However, these approaches often suffer from limited interpretability and face challenges in incorporating domain-specific prior knowledge.
By contrast, Large Language Model (LLM)-based causal discovery approaches can provide useful priors but struggle with formal causal reasoning.
arXiv Detail & Related papers (2024-10-28T15:43:31Z) - Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors [74.04775677110179]
In-context Learning (ICL) has become the primary method for performing natural language tasks with Large Language Models (LLMs).
In this work, we examine whether this is the result of the aggregation used in corresponding datasets, where trying to combine low-agreement, disparate annotations might lead to annotation artifacts that create detrimental noise in the prompt.
Our results indicate that aggregation is a confounding factor in the modeling of subjective tasks, and advocate focusing on modeling individuals instead.
arXiv Detail & Related papers (2024-10-17T17:16:00Z) - How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, covering a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based search method for large language models.
It formulates reasoning tasks as searching problems and proposes two search ideas to identify the optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
arXiv Detail & Related papers (2024-05-25T15:07:33Z) - Causal Graph Discovery with Retrieval-Augmented Generation based Large Language Models [23.438388321411693]
Causal graph recovery is traditionally done using statistical estimation-based methods or based on individuals' knowledge about the variables of interest.
We propose a novel method that leverages large language models (LLMs) to deduce causal relationships in general causal graph recovery tasks.
arXiv Detail & Related papers (2024-02-23T13:02:10Z) - Efficient Causal Graph Discovery Using Large Language Models [42.724534747353665]
The proposed framework uses a breadth-first search (BFS) approach, which allows it to use only a linear number of queries (an illustrative sketch of this query pattern appears after this list).
In addition to being more time and data-efficient, the proposed framework achieves state-of-the-art results on real-world causal graphs of varying sizes.
arXiv Detail & Related papers (2024-02-02T08:25:32Z) - Causal Inference Using LLM-Guided Discovery [34.040996887499425]
We show that the topological order over graph variables (causal order) alone suffices for causal effect inference (a worked adjustment formula illustrating this appears after this list).
We propose a robust technique for obtaining causal order from Large Language Models (LLMs).
Our approach significantly improves causal ordering accuracy compared to discovery algorithms.
arXiv Detail & Related papers (2023-10-23T17:23:56Z) - Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - To Know by the Company Words Keep and What Else Lies in the Vicinity [0.0]
We introduce an analytic model of the statistics learned by seminal algorithms, including GloVe and Word2Vec.
We derive -- to the best of our knowledge -- the first known solution to Word2Vec's softmax-optimized, skip-gram algorithm.
arXiv Detail & Related papers (2022-04-30T03:47:48Z)
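As referenced in the Efficient Causal Graph Discovery Using Large Language Models entry above, the BFS strategy spends roughly one LLM query per variable instead of one per ordered pair. The sketch below is an assumption-laden illustration of that query pattern; the hypothetical query_llm helper, the prompt wording, the choice of root variables, and the substring-based answer parsing are not taken from the cited paper.

```python
# Illustrative BFS-style sketch (assumptions, not the cited paper's code):
# one LLM query per visited variable asks for its direct effects among the
# remaining variables, giving roughly a linear number of queries overall.
from collections import deque


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError


def bfs_causal_discovery(variables, roots):
    """Return a set of directed edges (cause, effect) discovered via BFS.

    `roots` are variables assumed to have no causes among `variables`.
    """
    edges = set()
    visited = set(roots)
    frontier = deque(roots)
    while frontier:
        cause = frontier.popleft()
        candidates = [v for v in variables if v != cause]
        prompt = (
            f"Which of the following variables are directly caused by "
            f"'{cause}'? Candidates: {', '.join(candidates)}. "
            "Answer with a comma-separated list, or 'none'."
        )
        answer = query_llm(prompt)
        # Crude parsing: treat a candidate as an effect if its name appears
        # in the answer; a real system would parse a structured response.
        effects = [v for v in candidates if v.lower() in answer.lower()]
        for effect in effects:
            edges.add((cause, effect))
            if effect not in visited:
                visited.add(effect)
                frontier.append(effect)
    return edges
```

Because each variable is dequeued at most once, the number of LLM calls grows linearly with the number of variables rather than quadratically, as it would with exhaustive pairwise prompting.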
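For the Causal Inference Using LLM-Guided Discovery entry above, the claim that a causal order alone suffices for effect inference can be made concrete with a standard backdoor-adjustment argument. The formula below is an illustrative restatement assuming causal sufficiency (no hidden confounders among the graph variables), not a quotation from that paper.

```latex
% Worked adjustment formula assuming causal sufficiency (illustrative).
% Let X_1 \prec \dots \prec X_n be a causal (topological) order, with
% treatment X_i and outcome X_j such that X_i \prec X_j. Taking the
% adjustment set Z = \{X_k : X_k \prec X_i\} includes all parents of X_i
% and no descendants of X_i, so the backdoor criterion holds and
\[
  P\bigl(x_j \mid \operatorname{do}(x_i)\bigr)
  = \sum_{\mathbf{z}} P\bigl(x_j \mid x_i, \mathbf{z}\bigr)\, P(\mathbf{z}),
\]
% where \mathbf{z} ranges over the values of Z. Only the order is needed,
% not the full edge set of the graph.
```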