Unleashing the Power of Graph Learning through LLM-based Autonomous
Agents
- URL: http://arxiv.org/abs/2309.04565v1
- Date: Fri, 8 Sep 2023 19:34:29 GMT
- Title: Unleashing the Power of Graph Learning through LLM-based Autonomous
Agents
- Authors: Lanning Wei, Zhiqiang He, Huan Zhao, Quanming Yao
- Abstract summary: We propose to use Large Language Models (LLMs) as autonomous agents to simplify the learning process on diverse real-world graphs.
The proposed method, dubbed Auto$^2$Graph, achieves comparable performance on different datasets and learning tasks.
- Score: 38.71102849652413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph-structured data are ubiquitous in real-world
applications, yet handling these diverse data and graph learning tasks in an
efficient manner remains a challenge. To tackle complicated graph learning
tasks, experts have designed diverse Graph Neural Networks (GNNs) in recent
years. They have also applied AutoML to graphs, also known as AutoGraph, to
automatically generate data-specific solutions. Despite their success, these
approaches encounter limitations in (1) managing diverse learning tasks at
various levels, (2) dealing with procedures in graph learning beyond
architecture design, and (3) the substantial prior knowledge required to use
AutoGraph. In this paper, we propose to use Large Language Models (LLMs)
as autonomous agents to simplify the learning process on diverse real-world
graphs. Specifically, in response to a user request which may contain varying
data and learning targets at the node, edge, or graph levels, the complex graph
learning task is decomposed into three components following the agent planning,
namely, detecting the learning intent, configuring solutions based on
AutoGraph, and generating a response. The AutoGraph agents manage crucial
procedures in automated graph learning, including data-processing, AutoML
configuration, searching architectures, and hyper-parameter fine-tuning. With
these agents, the components are decomposed and completed step by step,
thereby generating a solution for the given data automatically, regardless of
whether the learning task is at the node or graph level. The proposed method
is dubbed Auto$^2$Graph. Its effectiveness is demonstrated by comparable
performance on different datasets and learning tasks, as well as the
human-like decisions made by the agents.
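The three-stage decomposition described in the abstract (intent detection, AutoGraph-based configuration, response generation) can be sketched as a minimal pipeline. All function names, the keyword-based intent detector, and the hard-coded search spaces below are illustrative assumptions for exposition, not the paper's actual Auto$^2$Graph implementation:

```python
# Minimal sketch of an LLM-agent-style graph-learning pipeline.
# The keyword intent detector and fixed search spaces are illustrative
# assumptions, not the Auto^2Graph implementation.

def detect_intent(request: str) -> str:
    """Stage 1: infer the learning level from the user request."""
    for level in ("node", "edge", "graph"):
        if level in request.lower():
            return level
    return "node"  # fallback assumption when no level is mentioned

def configure_solution(level: str) -> dict:
    """Stage 2: pick an AutoGraph-style search configuration."""
    search_space = {
        "node": {"arch": ["GCN", "GAT"], "readout": None},
        "edge": {"arch": ["GCN", "SAGE"], "readout": "edge-pair"},
        "graph": {"arch": ["GIN", "GCN"], "readout": "mean-pool"},
    }
    return {"level": level, **search_space[level], "hpo": "random-search"}

def generate_response(config: dict) -> str:
    """Stage 3: summarize the configured solution for the user."""
    return (f"Configured a {config['level']}-level task: "
            f"searching over {config['arch']} with {config['hpo']}.")

request = "Please classify the papers (nodes) in my citation graph."
response = generate_response(configure_solution(detect_intent(request)))
print(response)
```

In the paper, an LLM rather than keyword matching performs each stage; the point of the sketch is only the step-by-step decomposition of one user request into intent, configuration, and response.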
Related papers
- Can Graph Learning Improve Task Planning? [61.47027387839096]
Task planning is emerging as an important research topic alongside the development of large language models (LLMs)
In this paper, we explore graph learning-based methods for task planning.
Our approach complements prompt engineering and fine-tuning techniques, with performance further enhanced by improved prompts or a fine-tuned model.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
We develop a general graph foundation model to understand the complex topological patterns present in diverse graph data.
We propose a unified graph tokenizer to adapt our graph model to generalize well on unseen graph data.
We also develop a scalable graph transformer, which effectively captures node-wise dependencies within the global topological context.
arXiv Detail & Related papers (2024-03-02T08:05:03Z)
- UniGraph: Learning a Cross-Domain Graph Foundation Model From Natural Language [41.722898353772656]
We present our UniGraph framework, designed to train a graph foundation model capable of generalizing to unseen graphs and tasks across diverse domains.
We propose a cascaded architecture of Language Models (LMs) and Graph Neural Networks (GNNs) as backbone networks with a self-supervised training objective based on Masked Graph Modeling (MGM)
Our comprehensive experiments across various graph learning tasks and domains demonstrate the model's effectiveness in self-supervised representation learning on unseen graphs, few-shot in-context transfer, and zero-shot transfer.
arXiv Detail & Related papers (2024-02-21T09:06:31Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- Talk like a Graph: Encoding Graphs for Large Language Models [15.652881653332194]
We present the first comprehensive study of encoding graph-structured data as text for consumption by large language models (LLMs).
We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered.
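To make the first of these factors concrete, one possible graph-to-text encoding renders an edge list as English sentences. The phrasing below is one illustrative scheme, not an encoder from the paper, which compares several such schemes:

```python
# Toy sketch of one graph-to-text encoding for an LLM prompt.
# The sentence template is an illustrative choice; the cited paper
# compares multiple encoding schemes and their effect on reasoning.

def encode_graph_as_text(edges: list[tuple[str, str]]) -> str:
    """Render an undirected edge list as plain English sentences."""
    lines = [f"Node {u} is connected to node {v}." for u, v in edges]
    return "In this graph: " + " ".join(lines)

prompt = encode_graph_as_text([("A", "B"), ("B", "C")])
print(prompt)
# In this graph: Node A is connected to node B. Node B is connected to node C.
```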
arXiv Detail & Related papers (2023-10-06T19:55:21Z)
- One for All: Towards Training One Graph Model for All Classification Tasks [61.656962278497225]
A unified model for various graph tasks remains underexplored, primarily due to the challenges unique to the graph learning domain.
We propose One for All (OFA), the first general framework that can use a single graph model to address the above challenges.
OFA performs well across different tasks, making it the first general-purpose across-domains classification model on graphs.
arXiv Detail & Related papers (2023-09-29T21:15:26Z)
- Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT [10.879701971582502]
We aim to develop a large language model (LLM) with the reasoning ability on complex graph data.
Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer framework to teach LLMs themselves with prompts augmented by ChatGPT to use external graph reasoning API tools.
arXiv Detail & Related papers (2023-04-10T05:25:54Z)
- Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions [58.220137936626315]
This paper extensively discusses automated graph machine learning approaches.
We introduce AutoGL, our dedicated and the world's first open-source library for automated graph machine learning.
Also, we describe a tailored benchmark that supports unified, reproducible, and efficient evaluations.
arXiv Detail & Related papers (2022-01-04T18:31:31Z)
- AutoGL: A Library for Automated Graph Learning [67.63587865669372]
We present Automated Graph Learning (AutoGL), the first dedicated library for automated machine learning on graphs.
AutoGL is open-source, easy to use, and flexible to be extended.
We also present AutoGL-light, a lightweight version of AutoGL to facilitate customizing pipelines and enriching applications.
arXiv Detail & Related papers (2021-04-11T10:49:23Z)
- Automated Machine Learning on Graphs: A Survey [81.21692888288658]
This paper is the first systematic and comprehensive review of automated machine learning on graphs.
We focus on hyper-parameter optimization (HPO) and neural architecture search (NAS) for graph machine learning.
In the end, we share our insights on future research directions for automated graph machine learning.
arXiv Detail & Related papers (2021-03-01T04:20:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.