GraphTranslator: Aligning Graph Model to Large Language Model for
Open-ended Tasks
- URL: http://arxiv.org/abs/2402.07197v4
- Date: Wed, 28 Feb 2024 02:42:35 GMT
- Title: GraphTranslator: Aligning Graph Model to Large Language Model for
Open-ended Tasks
- Authors: Mengmei Zhang, Mingwei Sun, Peng Wang, Shen Fan, Yanhu Mo, Xiaoxiao
Xu, Hong Liu, Cheng Yang, Chuan Shi
- Abstract summary: Large language models (LLMs) like ChatGPT exhibit powerful zero-shot and instruction-following capabilities.
GraphTranslator aims to leverage the GM to handle pre-defined tasks effectively.
By translating node representation into tokens, GraphTranslator empowers an LLM to make predictions based on language instructions.
- Score: 44.02825843494608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) such as ChatGPT, which exhibit powerful
zero-shot and instruction-following capabilities, have catalyzed a revolutionary
transformation across diverse fields, especially for open-ended tasks. This
direction remains less explored in the graph domain: despite the availability of
numerous powerful graph models (GMs), they are restricted to tasks in a
pre-defined form. Although several methods applying LLMs to graphs have been
proposed, they fail to handle the pre-defined and open-ended tasks
simultaneously, employing the LLM either as a node feature enhancer or as a
standalone predictor. To
break this dilemma, we propose to bridge the pretrained GM and LLM by a
Translator, named GraphTranslator, aiming to leverage GM to handle the
pre-defined tasks effectively and utilize the extended interface of LLMs to
offer various open-ended tasks for GM. To train such a Translator, we propose a
Producer capable of constructing graph-text alignment data along with node
information, neighbor information and model information. By translating node
representation into tokens, GraphTranslator empowers an LLM to make predictions
based on language instructions, providing a unified perspective for both
pre-defined and open-ended tasks. Extensive results demonstrate the
effectiveness of our proposed GraphTranslator on zero-shot node classification.
The graph question answering experiments reveal the potential of our GraphTranslator
across a broad spectrum of open-ended tasks through language instructions. Our
code is available at: https://github.com/alibaba/GraphTranslator.
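The core mechanism, translating a frozen graph model's node representation into a sequence of soft tokens the LLM can consume alongside a language instruction, can be sketched roughly as follows. This is a minimal illustration with made-up dimensions and a single linear projection per token; the paper's actual Translator architecture is more involved.

```python
import random

# Illustrative dimensions (assumptions, not the paper's values):
# a graph-model embedding of size D_GM is mapped to NUM_TOKENS
# pseudo-token embeddings of size D_LLM (the LLM's embedding width).
D_GM, D_LLM, NUM_TOKENS = 8, 16, 4

random.seed(0)
# One (D_GM x D_LLM) weight matrix per soft token; in practice these
# weights would be learned on the Producer's graph-text alignment data.
weights = [[[random.gauss(0.0, 0.1) for _ in range(D_LLM)]
            for _ in range(D_GM)]
           for _ in range(NUM_TOKENS)]

def translate(node_embedding):
    """Project one node embedding into NUM_TOKENS soft-token embeddings."""
    tokens = []
    for w in weights:
        tokens.append([sum(node_embedding[i] * w[i][j] for i in range(D_GM))
                       for j in range(D_LLM)])
    return tokens  # prepended to the embedded text prompt before the LLM

node_emb = [random.gauss(0.0, 1.0) for _ in range(D_GM)]
soft_tokens = translate(node_emb)
print(len(soft_tokens), len(soft_tokens[0]))
```

The resulting soft tokens occupy the same embedding space as ordinary word tokens, which is what lets a single frozen LLM answer both pre-defined and open-ended queries about the node.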
Related papers
- Enhance Graph Alignment for Large Language Models [33.96082485852042]
Graph-to-token approaches are popular in enabling Large Language Models to process graph information.
Existing methods have a misalignment between self-supervised tasks and supervised downstream tasks.
We propose Graph Alignment Large Language Models (GALLM) to benefit from aligned task templates.
arXiv Detail & Related papers (2024-10-15T07:50:34Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- MuseGraph: Graph-oriented Instruction Tuning of Large Language Models for Generic Graph Mining [41.19687587548107]
Graph Neural Networks (GNNs) need to be re-trained every time when applied to different graph tasks and datasets.
We propose a novel framework, MuseGraph, which seamlessly integrates the strengths of GNNs and Large Language Models (LLMs).
Our experimental results demonstrate significant improvements in different graph tasks.
arXiv Detail & Related papers (2024-03-02T09:27:32Z)
- LLaGA: Large Language and Graph Assistant [73.71990472543027]
Large Language and Graph Assistant (LLaGA) is an innovative model to handle the complexities of graph-structured data.
LLaGA excels in versatility, generalizability and interpretability, allowing it to perform consistently well across different datasets and tasks.
Our experiments show that LLaGA delivers outstanding performance across four datasets and three tasks using one single model.
arXiv Detail & Related papers (2024-02-13T02:03:26Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [59.74814230246034]
Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities.
We investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors.
arXiv Detail & Related papers (2023-07-07T05:31:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.