Disentangled Representation Learning with Large Language Models for
Text-Attributed Graphs
- URL: http://arxiv.org/abs/2310.18152v4
- Date: Sat, 9 Mar 2024 16:08:16 GMT
- Title: Disentangled Representation Learning with Large Language Models for
Text-Attributed Graphs
- Authors: Yijian Qin, Xin Wang, Ziwei Zhang, Wenwu Zhu
- Abstract summary: We present the Disentangled Graph-Text Learner (DGTL) model, which enhances the reasoning and prediction capabilities of LLMs for TAGs.
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate that the proposed DGTL model achieves superior or comparable performance relative to state-of-the-art baselines.
- Score: 57.052160123387104
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Text-attributed graphs (TAGs) are prevalent on the web and research over TAGs
such as citation networks, e-commerce networks and social networks has
attracted considerable attention in the web community. Recently, large language
models (LLMs) have demonstrated exceptional capabilities across a wide range of
tasks. However, existing works harness the potential of LLMs by relying solely
on prompts to convey graph structure information, and thus suffer from an
insufficient understanding of the complex structural relationships within TAGs.
To address this problem, in this paper we present the Disentangled Graph-Text
Learner (DGTL) model, which enhances the reasoning and prediction capabilities
of LLMs for TAGs. Our proposed DGTL model
incorporates graph structure information through tailored disentangled graph
neural network (GNN) layers, enabling LLMs to capture the intricate
relationships hidden in text-attributed graphs from multiple structural
factors. Furthermore, DGTL operates with frozen pre-trained LLMs, reducing
computational costs and allowing greater flexibility in combining with
different LLMs. Experimental evaluations demonstrate that the proposed DGTL
model achieves superior or comparable performance relative to state-of-the-art
baselines. Additionally, we demonstrate that our DGTL model can provide natural
language explanations for its predictions, thereby significantly enhancing
model interpretability.
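The abstract describes disentangled GNN layers that separate neighborhood information into multiple structural factors before it reaches a frozen LLM. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the number of factors, the mean-neighbor aggregation, and all dimensions are illustrative assumptions.

```python
import numpy as np

def disentangled_gnn_layer(x, adj, factor_weights):
    """One disentangled GNN layer (illustrative sketch): each factor k has
    its own projection matrix, neighbor messages are mean-aggregated once,
    projected per factor, and the factor channels are concatenated so each
    node gets one embedding split into K structural factors.

    x              : (n, d)  node features
    adj            : (n, n)  adjacency with self-loops
    factor_weights : list of K (d, d_k) factor-specific projections
    """
    deg = adj.sum(axis=1, keepdims=True)           # degrees for mean-pooling
    agg = adj @ x / np.clip(deg, 1.0, None)        # mean over neighbors
    # project the aggregated messages into K disentangled factor channels
    channels = [np.maximum(agg @ w, 0.0) for w in factor_weights]  # ReLU
    return np.concatenate(channels, axis=1)        # (n, sum_k d_k)

rng = np.random.default_rng(0)
n, d, num_factors, d_k = 5, 8, 4, 3
x = rng.normal(size=(n, d))
# a path graph with self-loops as a toy TAG structure
adj = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
ws = [rng.normal(size=(d, d_k)) for _ in range(num_factors)]
h = disentangled_gnn_layer(x, adj, ws)
print(h.shape)  # (5, 12): one 12-dim embedding per node, 4 factors x 3 dims
```

In the paper's setting, such factor-wise embeddings would be injected into a frozen LLM's input rather than consumed directly, so only the small GNN is trained.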
Related papers
- A Hierarchical Language Model For Interpretable Graph Reasoning [47.460255447561906]
We introduce Hierarchical Language Model for Graph (HLM-G), which employs a two-block architecture to capture node-centric local information and interaction-centric global structure.
The proposed scheme allows LLMs to address various graph queries with high efficacy, efficiency, and robustness, while reducing computational costs on large-scale graph tasks.
Comprehensive evaluations across diverse graph reasoning and real-world tasks of node, link, and graph-levels highlight the superiority of our method.
arXiv Detail & Related papers (2024-10-29T00:28:02Z)
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches message passing procedure of graph learning by enhancing a limited fraction of nodes from the graph.
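This summary says the LLM service enhances only a limited fraction of the graph's nodes on demand. A hedged sketch of that selection step is below; the degree-based heuristic and budget are illustrative assumptions, not the paper's exact policy.

```python
import numpy as np

def select_nodes_for_llm(adj, budget):
    """Sketch of the 'all against some' idea: only a small budget of nodes
    is routed to the (expensive) LLM for feature enhancement; the rest use
    plain message passing. Degree is an assumed selection heuristic here."""
    degree = adj.sum(axis=1)
    return np.argsort(-degree, kind="stable")[:budget]  # highest-degree first

# toy star graph: node 0 is the hub, nodes 1-3 are leaves
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
chosen = select_nodes_for_llm(adj, budget=1)
print(chosen.tolist())  # [0]: only the hub node is sent to the LLM
```

Enhanced features for the chosen nodes would then replace their raw text features before GNN message passing.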
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness [39.57155321515097]
Large Language Models (LLMs) have demonstrated remarkable performance across various natural language processing tasks.
It remains unclear whether LLMs exhibit robustness in learning on graphs.
arXiv Detail & Related papers (2024-07-16T09:05:31Z)
- LangTopo: Aligning Language Descriptions of Graphs with Tokenized Topological Modeling [10.907949155931474]
We introduce LangTopo, which aligns graph structure modeling with natural language understanding at the token level.
We demonstrate the effectiveness of our proposed method on multiple datasets.
arXiv Detail & Related papers (2024-06-19T06:20:22Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
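The summary describes a GNN encoding neighboring nodes into a "graph prompt" that conditions the LLM. A minimal NumPy sketch of that pipeline follows; the mean-pooling encoder, the single prompt token, and all dimensions are assumptions for illustration, not the GPEFT architecture.

```python
import numpy as np

def build_graph_prompt(node_feats, neighbor_idx, w_gnn, w_proj):
    """Sketch of the graph-prompt idea: a small GNN-style encoder pools a
    node's neighborhood into one vector, and a linear head projects it into
    the LLM's embedding space as a soft 'graph prompt' token. Only w_gnn and
    w_proj would be trained; the LLM itself stays frozen."""
    pooled = node_feats[neighbor_idx].mean(axis=0)   # aggregate neighbors
    hidden = np.tanh(pooled @ w_gnn)                 # tiny structural encoder
    return hidden @ w_proj                           # map into LLM embed dim

rng = np.random.default_rng(1)
d_node, d_hid, d_llm = 16, 8, 32
feats = rng.normal(size=(10, d_node))
prompt = build_graph_prompt(feats, [2, 3, 7],
                            rng.normal(size=(d_node, d_hid)),
                            rng.normal(size=(d_hid, d_llm)))
# the graph prompt is prepended to the node text's token embeddings
token_embs = rng.normal(size=(5, d_llm))
llm_input = np.vstack([prompt[None, :], token_embs])
print(llm_input.shape)  # (6, 32): one graph-prompt token + 5 text tokens
```

A frozen LLM would consume `llm_input` in place of plain token embeddings, so structural information reaches it without full fine-tuning.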
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Exploring the Potential of Large Language Models in Graph Generation [51.046188600990014]
Graph generation requires large language models (LLMs) to generate graphs with given properties.
This paper explores the abilities of LLMs for graph generation with systematical task designs and experiments.
Our evaluations demonstrate that LLMs, particularly GPT-4, exhibit preliminary abilities in graph generation tasks.
arXiv Detail & Related papers (2024-03-21T12:37:54Z)
- Distilling Large Language Models for Text-Attributed Graph Learning [16.447635770220334]
Text-Attributed Graphs (TAGs) are graphs of connected textual documents.
Graph models can efficiently learn TAGs, but their training heavily relies on human-annotated labels.
Large language models (LLMs) have recently demonstrated remarkable capabilities in few-shot and zero-shot TAG learning.
arXiv Detail & Related papers (2024-02-19T10:31:53Z)
- Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data [13.524529952170672]
Large language models (LLMs) have achieved impressive performance on many natural language processing tasks.
We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance.
By comparing LLMs' performance with specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics.
arXiv Detail & Related papers (2023-10-07T23:25:22Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of explanations as features, which can be used to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.