Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via
Prompt Augmented by ChatGPT
- URL: http://arxiv.org/abs/2304.11116v3
- Date: Thu, 11 May 2023 06:00:03 GMT
- Title: Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via
Prompt Augmented by ChatGPT
- Authors: Jiawei Zhang
- Abstract summary: We aim to develop a large language model (LLM) with the ability to reason over complex graph data.
Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer framework, which uses prompts augmented by ChatGPT to teach LLMs to use external graph reasoning API tools.
- Score: 10.879701971582502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we aim to develop a large language model (LLM) with
the ability to reason over complex graph data. Currently, LLMs achieve very
impressive performance on various natural language learning tasks, and their
extensions have also been applied to vision tasks with multi-modal data.
However, when it comes to graph learning tasks, existing LLMs exhibit serious
flaws stemming from several inherent weaknesses: performing multi-step logical
reasoning, carrying out precise mathematical calculation, and perceiving
spatial and temporal factors.
To address these challenges, in this paper we investigate the principles,
methodologies, and algorithms needed to empower existing LLMs with graph
reasoning ability, which could substantially influence current research in
both LLMs and graph learning. Inspired by the latest ChatGPT and Toolformer
models, we propose the Graph-ToolFormer (Graph Reasoning oriented Toolformer)
framework, which uses prompts augmented by ChatGPT to teach LLMs to invoke
external graph reasoning API tools on their own. Specifically, we investigate
teaching Graph-ToolFormer to handle various graph data reasoning tasks,
including (1) basic graph data loading and graph property reasoning tasks,
ranging from simple graph order and size to graph diameter and periphery, and
(2) more advanced reasoning tasks on real-world graph data, such as
bibliographic networks, protein molecules, sequential recommender systems,
social networks, and knowledge graphs.
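To make the tool-use idea concrete, the sketch below shows how an LLM's output containing a graph-reasoning API tag could be post-processed against a real graph toolkit. The `[GR(...)]` tag syntax, the tool registry, and the use of networkx are illustrative assumptions for this sketch, not the paper's exact notation.

```python
# Minimal sketch of the Graph-ToolFormer idea: the LLM emits text containing
# an API tag, and a post-processor executes the call against a graph toolkit
# and splices the result back into the text. Tag format is hypothetical.
import re
import networkx as nx

# Hypothetical registry of graph reasoning "API tools" backed by networkx,
# covering the basic property reasoning tasks named in the abstract.
GRAPH_TOOLS = {
    "order":     lambda g: g.number_of_nodes(),
    "size":      lambda g: g.number_of_edges(),
    "diameter":  lambda g: nx.diameter(g),
    "periphery": lambda g: sorted(nx.periphery(g)),
}

def execute_api_calls(llm_output: str, graph: nx.Graph) -> str:
    """Replace each [GR(graph, "<tool>")] tag with the tool's result."""
    def run(match: re.Match) -> str:
        return str(GRAPH_TOOLS[match.group(1)](graph))
    return re.sub(r'\[GR\(graph, "(\w+)"\)\]', run, llm_output)

g = nx.cycle_graph(6)
text = 'This graph has order [GR(graph, "order")] and diameter [GR(graph, "diameter")].'
print(execute_api_calls(text, g))
# -> This graph has order 6 and diameter 3.
```

In the framework itself, the hard part is the fine-tuning on ChatGPT-augmented prompts that teaches the LLM where to emit such calls; the executor side can stay as simple as the stub above.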
Related papers
- Graph Linearization Methods for Reasoning on Graphs with Large Language Models [25.3545522174459]
Graphs should be linearized to reflect certain properties of natural language text, such as local dependency and global alignment.
We develop several graph linearization methods based on graph centrality, degeneracy, and node relabeling schemes.
Our work introduces novel graph representations suitable for LLMs, contributing to the potential integration of graph machine learning with multi-modal processing.
arXiv Detail & Related papers (2024-10-25T11:51:37Z)
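As a rough illustration of the centrality-based scheme mentioned in the entry above, the sketch below relabels nodes by descending degree centrality and emits the edge list in that order; the paper's actual templates and schemes may differ.

```python
# Hedged sketch of one linearization strategy: rank nodes by degree
# centrality, relabel them 0, 1, 2, ... in that order, and emit the edge
# list so central nodes appear early and consistently in the text.
import networkx as nx

def linearize_by_centrality(g: nx.Graph) -> str:
    centrality = nx.degree_centrality(g)
    ranked = sorted(g.nodes, key=lambda n: centrality[n], reverse=True)
    relabel = {node: i for i, node in enumerate(ranked)}
    edges = sorted((min(relabel[u], relabel[v]), max(relabel[u], relabel[v]))
                   for u, v in g.edges)
    return "; ".join(f"{u} -> {v}" for u, v in edges)

print(linearize_by_centrality(nx.star_graph(3)))
# -> "0 -> 1; 0 -> 2; 0 -> 3" (the hub is relabeled 0)
```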
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose the LLM4Graph datasets, which include crawled documents and auto-generated code based on six widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-Thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
arXiv Detail & Related papers (2024-04-10T15:41:53Z)
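A schematic of the iterative reason-on-graph loop described in the entry above is sketched below: rather than retrieving isolated text units, the agent repeatedly asks the model which neighbor to traverse next. The keyword-matching `ask_llm` stub is a stand-in for a real LLM call, and the loop is a simplification of the paper's iterative procedure.

```python
# Schematic Graph-CoT-style traversal: walk the graph one LLM-chosen hop
# at a time until the model decides to stop or the hop budget runs out.
import networkx as nx

def ask_llm(question: str, node: str, neighbors: list[str]) -> str:
    """Stand-in for an LLM call: pick a neighbor sharing a word with the
    question, otherwise answer STOP. A real system would prompt an LLM."""
    q_words = set(question.lower().replace("?", "").split())
    for n in neighbors:
        if set(n.lower().split()) & q_words:
            return n
    return "STOP"

def graph_cot(g: nx.Graph, question: str, start: str, max_hops: int = 5) -> str:
    """Iteratively traverse the graph, avoiding already-visited nodes."""
    node, visited = start, {start}
    for _ in range(max_hops):
        frontier = [n for n in g.neighbors(node) if n not in visited]
        step = ask_llm(question, node, frontier)
        if step == "STOP":
            break
        node = step
        visited.add(node)
    return node

kg = nx.Graph([("paper A", "author Smith"), ("author Smith", "affiliation MIT")])
print(graph_cot(kg, "What affiliation does Smith have?", start="paper A"))
# -> affiliation MIT
```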
- GraphInstruct: Empowering Large Language Models with Graph Understanding and Reasoning Capability [28.713449421717193]
We evaluate and enhance the graph understanding abilities of large language models (LLMs).
In this paper, we propose a benchmark named GraphInstruct, which includes 21 classical graph reasoning tasks.
We construct GraphLM through efficient instruction-tuning, which shows prominent graph understanding capability.
arXiv Detail & Related papers (2024-03-07T13:36:08Z)
- Can we Soft Prompt LLMs for Graph Learning Tasks? [22.286189757942054]
GraphPrompter is a framework designed to align graph information with Large Language Models (LLMs) via soft prompts.
The framework unveils the substantial capabilities of LLMs as predictors in graph-related tasks.
arXiv Detail & Related papers (2024-02-15T23:09:42Z)
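The sketch below illustrates the general soft-prompt mechanism at the embedding level: the graph is pooled into a few continuous vectors, projected into the LLM's embedding space, and prepended to the token embeddings. The mean-pool encoder, dimensions, and four-token prompt length are simplifying assumptions, not GraphPrompter's exact architecture.

```python
# Minimal soft-prompt sketch: graph features -> continuous prompt tokens
# prepended to the (frozen) LLM's token embeddings.
import torch
import torch.nn as nn

class GraphSoftPrompt(nn.Module):
    def __init__(self, node_dim: int, llm_dim: int, num_prompt_tokens: int = 4):
        super().__init__()
        self.proj = nn.Linear(node_dim, llm_dim * num_prompt_tokens)
        self.num_prompt_tokens = num_prompt_tokens
        self.llm_dim = llm_dim

    def forward(self, node_feats: torch.Tensor, token_embeds: torch.Tensor):
        # node_feats: (num_nodes, node_dim); token_embeds: (seq_len, llm_dim)
        graph_vec = node_feats.mean(dim=0)  # crude stand-in for a GNN encoder
        prompt = self.proj(graph_vec).view(self.num_prompt_tokens, self.llm_dim)
        # Prepend graph-derived soft tokens; the LLM attends over both.
        return torch.cat([prompt, token_embeds], dim=0)

sp = GraphSoftPrompt(node_dim=16, llm_dim=64)
fused = sp(torch.randn(10, 16), torch.randn(20, 64))
print(fused.shape)  # torch.Size([24, 64])
```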
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research advances graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- Integrating Graphs with Large Language Models: Methods and Prospects [68.37584693537555]
Large language models (LLMs) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications.
Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest.
This paper bifurcates such integrations into two predominant categories.
arXiv Detail & Related papers (2023-10-09T07:59:34Z)
- Talk like a Graph: Encoding Graphs for Large Language Models [15.652881653332194]
We present the first comprehensive study of encoding graph-structured data as text for consumption by large language models (LLMs).
We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered.
arXiv Detail & Related papers (2023-10-06T19:55:21Z)
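To illustrate why the encoding method matters, the sketch below contrasts two simple graph-to-text encoders in the spirit of the study above; the paper's exact templates and its other encoders differ.

```python
# Two illustrative graph-to-text encodings of the same graph: a terse
# math-style adjacency encoding versus a social-network-style narration.
import networkx as nx

def encode_adjacency(g: nx.Graph) -> str:
    """Terse, math-style encoding."""
    edges = ", ".join(f"({u}, {v})" for u, v in g.edges)
    return f"G has nodes {sorted(g.nodes)} and edges {edges}."

def encode_friendship(g: nx.Graph) -> str:
    """Narrative encoding; named entities may help some models."""
    names = {n: f"Person{n}" for n in g.nodes}
    facts = ". ".join(f"{names[u]} and {names[v]} are friends" for u, v in g.edges)
    return facts + "."

g = nx.path_graph(3)
print(encode_adjacency(g))   # G has nodes [0, 1, 2] and edges (0, 1), (1, 2).
print(encode_friendship(g))  # Person0 and Person1 are friends. Person1 and ...
```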