GraphTool-Instruction: Revolutionizing Graph Reasoning in LLMs through Decomposed Subtask Instruction
- URL: http://arxiv.org/abs/2412.12152v1
- Date: Wed, 11 Dec 2024 06:12:52 GMT
- Title: GraphTool-Instruction: Revolutionizing Graph Reasoning in LLMs through Decomposed Subtask Instruction
- Authors: Rongzheng Wang, Shuang Liang, Qizhi Chen, Jiasheng Zhang, Ke Qin,
- Abstract summary: We propose an innovative Instruction-tuning approach that decomposes the graph reasoning task into three distinct subtasks. Our GraphTool-Instruction can be used as a plug-and-play prompt for different LLMs without fine-tuning. We create a graph reasoning LLM called GraphForge based on Llama3-8B.
- Score: 10.028306235665312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have been demonstrated to possess the capabilities to understand fundamental graph properties and address various graph reasoning tasks. Existing methods fine-tune LLMs to understand and execute graph reasoning tasks through specially designed task instructions. However, these Text-Instruction methods generally exhibit poor performance. Inspired by tool learning, researchers propose Tool-Instruction methods that solve various graph problems through special tool calls (e.g., functions, APIs, and models), achieving significant improvements on graph reasoning tasks. Nevertheless, current Tool-Instruction approaches focus on the tool information and ignore the graph structure information, which leads to significantly inferior performance on small-scale LLMs (less than 13B). To tackle this issue, we propose GraphTool-Instruction, an innovative Instruction-tuning approach that decomposes the graph reasoning task into three distinct subtasks (i.e., graph extraction, tool name identification, and tool parameter extraction) and designs specialized instructions for each subtask. Our GraphTool-Instruction can be used as a plug-and-play prompt for different LLMs without fine-tuning. Moreover, building on GraphTool-Instruction, we develop GTools, a dataset that includes twenty graph reasoning tasks, and create a graph reasoning LLM called GraphForge based on Llama3-8B. We conduct extensive experiments on twenty graph reasoning tasks with different graph types (e.g., graph size or graph direction) and find that GraphTool-Instruction achieves state-of-the-art performance compared with Text-Instruction and Tool-Instruction methods. Fine-tuned on GTools, GraphForge achieves a further improvement of over 30% compared with the Tool-Instruction-enhanced GPT-3.5-turbo, and it performs comparably to the high-cost GPT-4o. Our code and data are available at https://anonymous.4open.science/r/GraphTool-Instruction.
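To make the decomposition concrete, here is a minimal Python sketch of how the three subtasks could be wired into one pipeline. The prompt templates, the tool registry, and the `call_llm` callable are illustrative assumptions, not the paper's actual instructions (those are in the linked repository).

```python
# Hypothetical sketch of GraphTool-Instruction's three-subtask decomposition.
# All prompt templates and the `call_llm` function are stand-ins for illustration.
import json
import re

import networkx as nx

GRAPH_PROMPT = "List every edge of the graph in this task as 'u -> v' lines:\n{task}"
TOOL_PROMPT = "Given the task below and the tool list {tools}, reply with one tool name:\n{task}"
PARAM_PROMPT = "Extract the parameters for tool '{tool}' from the task as a JSON object:\n{task}"

# Assumed tool registry: exact graph algorithms replace free-form LLM computation.
TOOLS = {
    "shortest_path": lambda g, p: nx.shortest_path(g, p["source"], p["target"]),
    "cycle_check": lambda g, p: not nx.is_directed_acyclic_graph(g),
}

def solve(task: str, call_llm) -> object:
    """Run the three subtasks in sequence, then execute the chosen tool."""
    # Subtask 1: graph extraction -- parse the LLM's edge list into a graph object.
    edges = re.findall(r"(\w+)\s*->\s*(\w+)", call_llm(GRAPH_PROMPT.format(task=task)))
    graph = nx.DiGraph(edges)
    # Subtask 2: tool name identification.
    tool = call_llm(TOOL_PROMPT.format(tools=list(TOOLS), task=task)).strip()
    # Subtask 3: tool parameter extraction.
    params = json.loads(call_llm(PARAM_PROMPT.format(tool=tool, task=task)))
    # Deterministic execution: the tool, not the LLM, does the graph computation.
    return TOOLS[tool](graph, params)
```

The point of the decomposition is that each LLM call has one narrow job while the final computation is handed to an exact algorithm, which is plausibly why even small-scale LLMs benefit from it.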
Related papers
- GraphScout: Empowering Large Language Models with Intrinsic Exploration Ability for Agentic Graph Reasoning [46.699075133532595]
Recent Graph-based Retrieval-Augmented Generation (GraphRAG) methods introduce iterative interaction between knowledge graphs and large language models (LLMs) to enhance reasoning capability. We propose GraphScout, a training-centric agentic graph reasoning framework equipped with more flexible graph exploration tools.
arXiv Detail & Related papers (2026-03-02T03:25:40Z) - Semi-supervised Instruction Tuning for Large Language Models on Text-Attributed Graphs [62.544129365882014]
We propose a novel Semi-supervised Instruction Tuning pipeline for Graph Learning, named SIT-Graph. SIT-Graph is model-agnostic and can be seamlessly integrated into any graph instruction tuning method that utilizes LLMs as the predictor. Extensive experiments demonstrate that when incorporated into state-of-the-art graph instruction tuning methods, SIT-Graph significantly enhances their performance on text-attributed graph benchmarks.
arXiv Detail & Related papers (2026-01-19T08:10:53Z) - AutoGraph-R1: End-to-End Reinforcement Learning for Knowledge Graph Construction [60.51319139563509]
We introduce AutoGraph-R1, the first framework to directly optimize KG construction for task performance using Reinforcement Learning (RL). We design two novel, task-aware reward functions, one for graphs as knowledge carriers and another for graphs as knowledge indices. Our work shows it is possible to close the loop between construction and application, shifting the paradigm from building intrinsically "good" graphs to building demonstrably "useful" ones.
arXiv Detail & Related papers (2025-10-17T06:03:36Z) - What Do LLMs Need to Understand Graphs: A Survey of Parametric Representation of Graphs [69.48708136448694]
Large language models (LLMs) are attracting attention in the AI community for their expected reasoning and inference abilities.
We believe this kind of parametric representation of graphs, graph laws, can be a solution for enabling LLMs to understand graph data as input.
arXiv Detail & Related papers (2024-10-16T00:01:31Z) - Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose the LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z) - GraphStorm: all-in-one graph machine learning framework for industry applications [75.23076561638348]
GraphStorm is an end-to-end solution for scalable graph construction, graph model training and inference.
Every component in GraphStorm can operate on graphs with billions of nodes and can scale model training and inference to different hardware without changing any code.
GraphStorm has been used and deployed for over a dozen billion-scale industry applications after its release in May 2023.
arXiv Detail & Related papers (2024-06-10T04:56:16Z) - Joint Embeddings for Graph Instruction Tuning [0.0]
This work explores the integration of the graph modality in Large Language Models (LLMs) for general graph instruction following tasks.
It aims at producing a deep learning model that enhances an underlying LLM with graph embeddings and trains it to understand them.
The approach performs significantly better than a graph-to-text approach and remains consistent even for larger graphs.
arXiv Detail & Related papers (2024-05-31T08:26:47Z) - Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-Thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
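The iterative loop Graph-CoT describes can be pictured roughly as follows. This Python sketch is an assumption-laden illustration: the reply protocol, the two graph operations, and `call_llm` are hypothetical stand-ins, not the paper's interface.

```python
# Hedged sketch of a reason-then-query loop in the spirit of Graph-CoT.
import networkx as nx

def graph_cot(question: str, graph: nx.Graph, call_llm, max_steps: int = 5) -> str:
    # Assumed graph operations the model may request.
    ops = {
        "neighbors": lambda n: sorted(graph.neighbors(n)),
        "attributes": lambda n: dict(graph.nodes[n]),
    }
    context = f"Question: {question}"
    for _ in range(max_steps):
        # Reasoning step: the model either answers or requests a graph operation.
        reply = call_llm(
            context + "\nReply 'ANSWER: <text>' or 'QUERY: <neighbors|attributes> <node>'."
        ).strip()
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        # Interaction + execution: run the requested operation and feed the result
        # back, so the next reasoning step is grounded in the actual graph.
        _, op, node = reply.split(maxsplit=2)
        context += f"\n{reply}\nResult: {ops[op](node)}"
    return "no answer within the step budget"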
arXiv Detail & Related papers (2024-04-10T15:41:53Z) - GraphInstruct: Empowering Large Language Models with Graph Understanding and Reasoning Capability [28.713449421717193]
We evaluate and enhance the graph understanding abilities of large language models (LLMs).
In this paper, we propose a benchmark named GraphInstruct, which includes 21 classical graph reasoning tasks.
We construct GraphLM through efficient instruction-tuning, which shows prominent graph understanding capability.
arXiv Detail & Related papers (2024-03-07T13:36:08Z) - InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment [30.136514352238795]
InstructGraph is a framework that empowers large language models with the abilities of graph reasoning and generation.
We show that InstructGraph can achieve the best performance and outperform GPT-4 and LLaMA2 by more than 13% and 38%, respectively.
arXiv Detail & Related papers (2024-02-13T20:47:17Z) - GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research addresses this challenge by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z) - Can Language Models Solve Graph Problems in Natural Language? [51.28850846990929]
Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures.
We propose NLGraph, a benchmark of graph-based problem solving designed in natural language.
arXiv Detail & Related papers (2023-05-17T08:29:21Z) - Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT [10.879701971582502]
We aim to develop a large language model (LLM) with the reasoning ability on complex graph data.
Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer framework to teach LLMs themselves with prompts augmented by ChatGPT to use external graph reasoning API tools.
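The prompt-augmented tool-call idea can be sketched as a post-processor that detects API markers the LLM emits and executes them on the graph. The `[GR(...)]` syntax and the API table below are illustrative assumptions, not the paper's exact call format.

```python
# Hedged sketch of executing graph-reasoning API calls embedded in LLM output.
import re

import networkx as nx

# Assumed API table mapping marker names to exact graph algorithms.
APIS = {
    "diameter": lambda g: nx.diameter(g),
    "density": lambda g: nx.density(g),
}
CALL = re.compile(r"\[GR\((\w+)\)\]")

def execute_api_calls(llm_output: str, graph: nx.Graph) -> str:
    """Replace each embedded [GR(api)] marker with its executed result."""
    return CALL.sub(lambda m: str(APIS[m.group(1)](graph)), llm_output)

# e.g. "The graph's diameter is [GR(diameter)]." -> "The graph's diameter is 2."
```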
arXiv Detail & Related papers (2023-04-10T05:25:54Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE), are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)