Can GNN be Good Adapter for LLMs?
- URL: http://arxiv.org/abs/2402.12984v1
- Date: Tue, 20 Feb 2024 13:13:13 GMT
- Title: Can GNN be Good Adapter for LLMs?
- Authors: Xuanwen Huang, Kaiqiao Han, Yang Yang, Dezheng Bao, Quanjin Tao, Ziwei
Chai, and Qi Zhu
- Abstract summary: Text-attributed graphs (TAGs) have broad applications in social media, recommendation systems, etc.
We propose GraphAdapter, which uses a graph neural network (GNN) as an efficient adapter in collaboration with large language models (LLMs) to tackle TAGs.
Through extensive experiments across multiple real-world TAGs, GraphAdapter based on Llama 2 gains an average improvement of approximately 5% on node classification.
- Score: 7.18511200494162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, large language models (LLMs) have demonstrated superior
capabilities in understanding and zero-shot learning on textual data, promising
significant advances for many text-related domains. In the graph domain,
various real-world scenarios also involve textual data, where tasks and node
features can be described by text. These text-attributed graphs (TAGs) have
broad applications in social media, recommendation systems, etc. Thus, this
paper explores how to utilize LLMs to model TAGs. Previous methods for TAG
modeling are based on million-scale LMs. When scaled up to billion-scale LLMs,
they incur prohibitive computational costs. Moreover, they ignore the
zero-shot inference capabilities of LLMs. Therefore, we propose
GraphAdapter, which uses a graph neural network (GNN) as an efficient adapter
in collaboration with LLMs to tackle TAGs. In terms of efficiency, the GNN
adapter introduces only a few trainable parameters and can be trained with low
computation costs. The entire framework is trained using auto-regression on
node text (next token prediction). Once trained, GraphAdapter can be seamlessly
fine-tuned with task-specific prompts for various downstream tasks. Through
extensive experiments across multiple real-world TAGs, GraphAdapter based on
Llama 2 gains an average improvement of approximately 5% on node
classification. Furthermore, GraphAdapter can also adapt to other language
models, including RoBERTa and GPT-2. The promising results demonstrate that GNNs
can serve as effective adapters for LLMs in TAG modeling.
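The abstract outlines the core recipe: freeze the LLM, attach a lightweight GNN adapter, and pretrain the pair with next-token prediction on node text before prompt-based fine-tuning. The sketch below illustrates that recipe in PyTorch; the module names, the gated fusion, the mean-pooled node state, and the HuggingFace-style `last_hidden_state` access are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the GraphAdapter recipe described above: a small GNN
# adapter trained via next-token prediction while the LLM stays frozen.
# All names and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MeanGNNLayer(nn.Module):
    """One message-passing step: mean-aggregate neighbors, then project."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # adj: (N, N) row-normalized adjacency, so adj @ h is a neighbor mean.
        neigh = adj @ h
        return F.relu(self.proj(torch.cat([h, neigh], dim=-1)))


class GraphAdapter(nn.Module):
    """Few trainable parameters: two GNN layers plus a scalar fusion gate."""

    def __init__(self, dim):
        super().__init__()
        self.layers = nn.ModuleList(MeanGNNLayer(dim) for _ in range(2))
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, node_h, adj):
        g = node_h
        for layer in self.layers:
            g = layer(g, adj)
        # Gate between the LLM's own node state and the graph-aware state.
        alpha = torch.sigmoid(self.gate(torch.cat([node_h, g], dim=-1)))
        return alpha * node_h + (1 - alpha) * g


def pretrain_step(llm, lm_head, adapter, input_ids, adj):
    """Auto-regressive loss on node text; gradients reach only the adapter."""
    with torch.no_grad():  # frozen LLM (e.g. Llama 2, RoBERTa, GPT-2)
        hidden = llm(input_ids).last_hidden_state     # (N, seq, dim)
    node_state = hidden.mean(dim=1)                   # crude node pooling
    fused = adapter(node_state, adj)                  # (N, dim)
    logits = lm_head(hidden + fused.unsqueeze(1))     # (N, seq, vocab)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),  # predict next token
        input_ids[:, 1:].reshape(-1),
    )
```

Once pretrained this way, only the adapter (plus any small task head) would be fine-tuned with task-specific prompts for downstream tasks such as node classification.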
Related papers
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs), with their pretrained knowledge and powerful semantic comprehension abilities, have recently shown remarkable promise in applications that use vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt; a minimal sketch of this idea appears after the list below.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- LLaGA: Large Language and Graph Assistant [73.71990472543027]
Large Language and Graph Assistant (LLaGA) is an innovative model designed to handle the complexities of graph-structured data.
LLaGA excels in versatility, generalizability and interpretability, allowing it to perform consistently well across different datasets and tasks.
Our experiments show that LLaGA delivers outstanding performance across four datasets and three tasks using one single model.
arXiv Detail & Related papers (2024-02-13T02:03:26Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [59.74814230246034]
Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities.
We investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors.
arXiv Detail & Related papers (2023-07-07T05:31:31Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of LLM-generated explanations as features to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88-fold speedup over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
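To make the graph-prompt idea from the GPEFT entry above concrete, here is a hedged sketch: a GNN-style mean aggregation pools neighbor features and projects them into a few soft-prompt vectors that are prepended to a frozen LLM's token embeddings. `GraphPromptEncoder`, the single aggregation step, and all shapes are illustrative assumptions rather than the paper's actual design.

```python
# Hypothetical sketch of a "graph prompt": neighbor features are pooled by
# one GNN-style step and projected into soft-prompt vectors that would be
# prepended to the LLM's token embeddings. Names and shapes are assumed.
import torch
import torch.nn as nn


class GraphPromptEncoder(nn.Module):
    def __init__(self, feat_dim, llm_dim, prompt_len=4):
        super().__init__()
        self.prompt_len, self.llm_dim = prompt_len, llm_dim
        self.proj = nn.Linear(feat_dim, prompt_len * llm_dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) row-normalized adjacency; one mean-aggregation step.
        pooled = adj @ node_feats  # (N, feat_dim)
        return self.proj(pooled).view(-1, self.prompt_len, self.llm_dim)


# Usage with a frozen LLM that accepts input embeddings directly:
# token_embeds: (N, seq, llm_dim); graph prompt: (N, prompt_len, llm_dim)
# inputs_embeds = torch.cat([encoder(node_feats, adj), token_embeds], dim=1)
```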
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.