Model Generalization on Text Attribute Graphs: Principles with Large Language Models
- URL: http://arxiv.org/abs/2502.11836v1
- Date: Mon, 17 Feb 2025 14:31:00 GMT
- Title: Model Generalization on Text Attribute Graphs: Principles with Large Language Models
- Authors: Haoyu Wang, Shikun Liu, Rongzhe Wei, Pan Li
- Abstract summary: Large language models (LLMs) have been introduced to graph learning, aiming to extend their zero-shot generalization success to tasks where labeled graph data is scarce. We develop a framework for inference over text-attributed graphs (TAGs) based on task-adaptive embeddings and a generalizable graph information aggregation mechanism. Evaluations on 11 real-world TAG benchmarks demonstrate that LLM-BP significantly outperforms existing approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have recently been introduced to graph learning, aiming to extend their zero-shot generalization success to tasks where labeled graph data is scarce. Among these applications, inference over text-attributed graphs (TAGs) presents unique challenges: existing methods struggle with LLMs' limited context length for processing large node neighborhoods and the misalignment between node embeddings and the LLM token space. To address these issues, we establish two key principles for ensuring generalization and derive the framework LLM-BP accordingly: (1) Unifying the attribute space with task-adaptive embeddings, where we leverage LLM-based encoders and task-aware prompting to enhance generalization of the text attribute embeddings; (2) Developing a generalizable graph information aggregation mechanism, for which we adopt belief propagation with LLM-estimated parameters that adapt across graphs. Evaluations on 11 real-world TAG benchmarks demonstrate that LLM-BP significantly outperforms existing approaches, achieving 8.10% improvement with task-conditional embeddings and an additional 1.71% gain from adaptive aggregation.
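The two principles lend themselves to a compact illustration. Below is a minimal, hypothetical Python sketch (numpy only): similarity to class-description embeddings stands in for the paper's task-aware LLM encoder, and a hard-coded homophily parameter `h` stands in for the LLM-estimated coupling. The update is a simplified BP-flavored smoothing, not the paper's exact algorithm.

```python
# Minimal sketch of the two LLM-BP principles. All names and numbers are
# illustrative; the paper's encoder, prompts, and parameter estimation
# are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# (1) Task-adaptive embeddings: in LLM-BP these come from an LLM encoder
# with task-aware prompts; random unit vectors stand in for them here.
num_nodes, dim, num_classes = 6, 8, 2
node_emb = rng.normal(size=(num_nodes, dim))
node_emb /= np.linalg.norm(node_emb, axis=1, keepdims=True)
class_emb = rng.normal(size=(num_classes, dim))
class_emb /= np.linalg.norm(class_emb, axis=1, keepdims=True)

# Initial per-node beliefs: softmax over embedding/class similarities.
logits = node_emb @ class_emb.T
beliefs = np.exp(logits)
beliefs /= beliefs.sum(axis=1, keepdims=True)

# (2) Generalizable aggregation: BP-style message passing with an edge
# potential controlled by a homophily parameter `h`, which LLM-BP
# estimates with an LLM; it is hard-coded in this sketch.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
h = 0.8  # assumed probability that linked nodes share a label
potential = np.full((num_classes, num_classes), (1 - h) / (num_classes - 1))
np.fill_diagonal(potential, h)

for _ in range(10):  # simplified synchronous smoothing, not exact sum-product
    log_b = np.log(beliefs + 1e-12)
    for u, v in edges:
        log_b[v] += np.log(beliefs[u] @ potential + 1e-12)
        log_b[u] += np.log(beliefs[v] @ potential + 1e-12)
    b = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    beliefs = b / b.sum(axis=1, keepdims=True)

print(beliefs.argmax(axis=1))  # zero-shot label guesses
```

With high `h`, neighbors pull each other toward the same class; with `h` below `1 / num_classes` the same machinery handles heterophilic graphs, which is why estimating the coupling per graph matters.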
Related papers
- Rethinking Graph Structure Learning in the Era of LLMs [29.867262599990227]
Large Language and Tree Assistant (LLaTA) leverages tree-based LLM in-context learning to enhance the understanding of topology and text.
Extensive experiments on 10 datasets demonstrate that LLaTA is flexible and can be incorporated with any backbone.
arXiv Detail & Related papers (2025-03-27T07:28:30Z)
- Data-centric Federated Graph Learning with Large Language Models [34.224475952206404]
In federated graph learning (FGL), a complete graph is divided into multiple subgraphs stored in each client due to privacy concerns.
A pain point of FGL is the heterogeneity problem, where nodes or structures exhibit non-IID properties across clients.
We propose a general framework that theoretically decomposes the task of applying large language models to FGL into two sub-tasks.
arXiv Detail & Related papers (2025-03-25T08:43:08Z)
- Training Large Recommendation Models via Graph-Language Token Alignment [53.3142545812349]
We propose a novel framework to train large recommendation models via Graph-Language Token Alignment (GLTA).
By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs.
Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction; a hedged sketch of the alignment idea follows this entry.
arXiv Detail & Related papers (2025-02-26T02:19:10Z)
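As a rough illustration of what aligning graph nodes with pretrained LLM tokens can mean: a learnable projector `W` maps a GNN user embedding into token-embedding space, and a cross-entropy-style "logits matching" step nudges the graph-side item distribution toward the language-side one. All names, dimensions, and the loss below are assumptions, not GLTA's actual architecture.

```python
# Toy graph-to-token alignment in the spirit of GLTA (stand-ins only).
import numpy as np

rng = np.random.default_rng(1)
gnn_dim, llm_dim, num_items = 16, 32, 5

user_emb = rng.normal(size=gnn_dim)               # from an interaction-graph GNN (stand-in)
item_tok = rng.normal(size=(num_items, llm_dim))  # pretrained LLM token embeddings (stand-in)
W = 0.1 * rng.normal(size=(gnn_dim, llm_dim))     # learnable graph-to-token projector

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

llm_logits = rng.normal(size=num_items)  # what the LLM head would score (stand-in target)

# "Logits matching": gradient steps on cross-entropy between the two distributions.
for _ in range(200):
    graph_logits = (user_emb @ W) @ item_tok.T
    p, q = softmax(graph_logits), softmax(llm_logits)
    grad_W = np.outer(user_emb, (p - q) @ item_tok)  # d(cross-entropy)/dW
    W -= 0.01 * grad_W

print(softmax((user_emb @ W) @ item_tok.T).round(3))
print(softmax(llm_logits).round(3))
```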
- GraphICL: Unlocking Graph Learning Potential in LLMs through Structured Prompt Design [13.365623514253926]
Graph In-context Learning (GraphICL) Benchmark is a comprehensive benchmark comprising novel prompt templates to capture graph structure and handle limited label knowledge. Our systematic evaluation shows that general-purpose LLMs equipped with our GraphICL prompts outperform state-of-the-art specialized graph LLMs and graph neural network models (an illustrative template follows this entry).
arXiv Detail & Related papers (2025-01-27T03:50:30Z)
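The function below is a guess at what such a structured template can look like: node text, serialized one-hop neighbors, and a few labeled examples, flattened into plain text for a general-purpose LLM. The wording is an assumption; the benchmark's actual templates differ.

```python
# Hypothetical structured graph prompt in the spirit of GraphICL.
def build_graph_prompt(node_text, neighbor_texts, labeled_examples, classes):
    lines = ["You are classifying papers in a citation network.",
             f"Possible classes: {', '.join(classes)}.", ""]
    for text, label in labeled_examples:       # few-shot label knowledge
        lines += [f"Paper: {text}", f"Class: {label}", ""]
    lines.append(f"Paper: {node_text}")
    for i, t in enumerate(neighbor_texts, 1):  # serialized 1-hop structure
        lines.append(f"Cited paper {i}: {t}")
    lines.append("Class:")
    return "\n".join(lines)

prompt = build_graph_prompt(
    "Belief propagation for zero-shot node classification ...",
    ["LLM encoders for text-attributed graphs ..."],
    [("Graph neural networks for citation data ...", "cs.LG")],
    ["cs.LG", "cs.CL"],
)
print(prompt)
```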
- NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models [26.739650151993928]
Graphs are a fundamental data structure for representing relationships in real-world scenarios.
Applying Large Language Models (LLMs) to graph-related tasks poses significant challenges.
We introduce Node Tokenizer for Large Language Models (NT-LLM), a novel framework that efficiently encodes graph structures.
arXiv Detail & Related papers (2024-10-14T17:21:57Z)
- Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning [28.660326096652437]
We introduce AskGNN, a novel approach that bridges the gap between sequential text processing and graph-structured data. AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs as in-context examples (a minimal sketch of this retrieval step follows this entry). Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance.
arXiv Detail & Related papers (2024-10-09T17:19:12Z)
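A rough sketch of the retrieval step: score the few labeled nodes against the query node and keep the top-k as in-context examples. Cosine similarity over stand-in embeddings replaces the paper's trained GNN retriever; every name below is illustrative.

```python
# Toy example selection for graph in-context learning (AskGNN-flavored).
import numpy as np

rng = np.random.default_rng(2)
emb = rng.normal(size=(8, 16))                 # node embeddings (stand-in for GNN output)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = {0: "cs.LG", 2: "cs.CL", 5: "cs.LG"}  # the few labeled nodes

def retrieve_examples(query_node, k=2):
    labeled = list(labels)
    scores = emb[labeled] @ emb[query_node]    # similarity of labeled nodes to the query
    top = np.argsort(scores)[::-1][:k]
    return [(labeled[i], labels[labeled[i]]) for i in top]

print(retrieve_examples(7))  # pairs to prepend to the LLM prompt as examples
```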
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, and a variety of models, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs). We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing only a limited fraction of nodes in the graph (a stand-in sketch of this selective enhancement follows this entry).
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
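Below is a guess at the "some, not all" mechanic: route only the highest-degree nodes (one stand-in heuristic; the paper explores several) through an LLM-enrichment stub, then run ordinary mean-aggregation message passing over everyone. `llm_enhance` is a placeholder for a real LLM call.

```python
# Toy selective LLM enhancement before message passing (E-LLaGNN-flavored).
import numpy as np

rng = np.random.default_rng(3)
num_nodes, dim, budget = 10, 8, 0.2
feats = rng.normal(size=(num_nodes, dim))
edges = [(i, (i + 1) % num_nodes) for i in range(num_nodes)]  # toy ring graph

degree = np.zeros(num_nodes)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

def llm_enhance(x):
    # Placeholder for an on-demand LLM call that rewrites/augments a node's
    # text and re-encodes it; here we just perturb the feature vector.
    return x + 0.1 * rng.normal(size=x.shape)

chosen = np.argsort(degree)[::-1][: int(budget * num_nodes)]  # top 20% of nodes
for n in chosen:
    feats[n] = llm_enhance(feats[n])

# One round of mean-aggregation message passing over the partly enhanced features.
agg = feats.copy()
for u, v in edges:
    agg[u] += feats[v]
    agg[v] += feats[u]
agg /= (degree + 1)[:, None]
print(agg.shape)
```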
- Large Language Models as Topological Structure Enhancers for Text-Attributed Graphs [4.487720716313697]
Large language models (LLMs) have revolutionized the field of natural language processing (NLP).
This work explores how to leverage the information retrieval and text generation capabilities of LLMs to refine and enhance the topological structure of text-attributed graphs (TAGs) under the node classification setting; a toy sketch of LLM-guided edge refinement follows this entry.
arXiv Detail & Related papers (2023-11-24T07:53:48Z)
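One way to read "LLMs as structure enhancers" is as an edge oracle: show the LLM two nodes' texts and ask whether they should be linked, then add or prune edges accordingly. The sketch below fakes the LLM vote with word overlap; `llm_says_linked` is a placeholder, not the paper's method.

```python
# Toy LLM-guided topology refinement for a text-attributed graph.
from itertools import combinations

texts = {
    0: "graph neural networks for citation graphs",
    1: "message passing on citation graphs",
    2: "protein folding with deep learning",
}
edges = {(0, 2)}  # a noisy edge the oracle may prune

def llm_says_linked(a, b):
    # Placeholder: a real system would prompt an LLM with both texts and
    # parse a yes/no answer; here word overlap fakes the vote.
    wa, wb = set(texts[a].split()), set(texts[b].split())
    return len(wa & wb) >= 2

refined = {(a, b) for a, b in combinations(texts, 2) if llm_says_linked(a, b)}
print("added:", refined - edges, "removed:", edges - refined)
```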
- Integrating Graphs with Large Language Models: Methods and Prospects [68.37584693537555]
Large language models (LLMs) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications.
Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest.
This paper bifurcates such integrations into two predominant categories.
arXiv Detail & Related papers (2023-10-09T07:59:34Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is our use of LLM-generated explanations as features, which can be used to boost GNN performance on downstream tasks (a minimal sketch of this recipe follows this entry).
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
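The recipe reduces to a small pipeline, sketched below with stand-ins: prompt an LLM for an explanation per node, embed that explanation with a smaller encoder, and concatenate the result onto the original node features before GNN training. Both `llm_explain` and `encode` are placeholders for the paper's LLM-to-LM interpreter.

```python
# Toy explanations-as-features pipeline (stand-ins throughout).
import numpy as np

rng = np.random.default_rng(4)

def llm_explain(text):
    # Placeholder for prompting an LLM, e.g. "why might this paper belong
    # to its class?"; returns a short explanation string.
    return f"Key signals: {text.split()[0]} terminology and citation context."

def encode(text, dim=8):
    # Placeholder for a small LM encoder; hashing keeps the sketch
    # dependency-free and deterministic within a run.
    rng_local = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng_local.normal(size=dim)

node_texts = ["transformer pretraining objectives", "graph convolution spectra"]
orig_feats = rng.normal(size=(len(node_texts), 8))

expl_feats = np.stack([encode(llm_explain(t)) for t in node_texts])
enriched = np.concatenate([orig_feats, expl_feats], axis=1)  # input to any GNN
print(enriched.shape)  # (2, 16)
```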