Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs
- URL: http://arxiv.org/abs/2505.17599v2
- Date: Thu, 02 Oct 2025 06:04:27 GMT
- Title: Dynamic Bundling with Large Language Models for Zero-Shot Inference on Text-Attributed Graphs
- Authors: Yusheng Zhao, Qixin Zhang, Xiao Luo, Weizhi Zhang, Zhiping Xiao, Wei Ju, Philip S. Yu, Ming Zhang
- Abstract summary: Large language models (LLMs) have been used in many zero-shot learning problems. LLMs struggle with text attributes isolated from the graph topology. They yield unreliable predictions due to both information insufficiency and the inherent weakness of LLMs.
- Score: 56.73437142790496
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have been applied to many zero-shot learning problems owing to their strong generalization ability. Recently, adopting LLMs on text-attributed graphs (TAGs) has drawn increasing attention. However, their adoption faces two major challenges: limited information on graph structure and unreliable responses. LLMs struggle with text attributes that are isolated from the graph topology. Worse still, they yield unreliable predictions due to both information insufficiency and the inherent weaknesses of LLMs (e.g., hallucination). To this end, this paper proposes a novel method named Dynamic Text Bundling Supervision (DENSE), which queries LLMs with bundles of texts to obtain bundle-level labels and uses these labels to supervise graph neural networks. Specifically, we sample a set of bundles, each containing a set of nodes in close proximity together with their corresponding texts. We then query an LLM with the bundled texts to obtain the label of each bundle. Subsequently, the bundle labels are used to supervise the optimization of graph neural networks, and the bundles are further refined to exclude noisy items. To justify our design, we also provide a theoretical analysis of the proposed method. Extensive experiments across ten datasets validate the effectiveness of the proposed method.
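As a rough illustration of the pipeline described in the abstract (not the authors' released code), the sketch below samples bundles of topologically close nodes, asks an LLM for one label per bundle, and uses those bundle-level labels to supervise a node-level GNN. The `query_llm` placeholder, the `gnn` model, and all hyperparameters are assumptions.

```python
# Minimal sketch of bundle-level supervision, assuming a PyTorch-style GNN.
# All names (query_llm, gnn, bundle sizes) are illustrative placeholders.
import torch
import torch.nn.functional as F

def sample_bundles(edge_index, num_nodes, bundle_size=5, num_bundles=50):
    """Sample bundles of nearby nodes via short random walks on the graph."""
    adj = [[] for _ in range(num_nodes)]
    for src, dst in edge_index.t().tolist():
        adj[src].append(dst)
    bundles = []
    for _ in range(num_bundles):
        cur = torch.randint(num_nodes, (1,)).item()
        bundle = {cur}
        while len(bundle) < bundle_size and adj[cur]:
            cur = adj[cur][torch.randint(len(adj[cur]), (1,)).item()]
            bundle.add(cur)
        bundles.append(sorted(bundle))
    return bundles

def query_llm(texts, label_names):
    """Placeholder: ask an LLM which class best describes this bundle of texts,
    returning an index into label_names."""
    raise NotImplementedError  # plug in an actual LLM call here

def train_step(gnn, optimizer, x, edge_index, texts, label_names, bundles):
    """One optimization step: bundle labels supervise pooled node predictions.
    (Bundle refinement, i.e. dropping items that look noisy, is omitted here.)"""
    optimizer.zero_grad()
    logits = gnn(x, edge_index)                                  # [num_nodes, num_classes]
    loss = torch.zeros(())
    for bundle in bundles:
        y = query_llm([texts[i] for i in bundle], label_names)   # one label per bundle
        bundle_logits = logits[bundle].mean(dim=0, keepdim=True) # pool node predictions
        loss = loss + F.cross_entropy(bundle_logits, torch.tensor([y]))
    loss = loss / len(bundles)
    loss.backward()
    optimizer.step()
    return loss.item()
```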
Related papers
- Toward Graph-Tokenizing Large Language Models with Reconstructive Graph Instruction Tuning [17.712367049197212]
A key challenge is to align graph data with language spaces so that large language models (LLMs) can better comprehend graphs. GTokenLLMs encode complex structures and lengthy texts into a graph token sequence, and then align them with text tokens via language instruction tuning. Despite their initial success, our information-theoretic analysis reveals that existing GTokenLLMs rely solely on text supervision from language instructions.
arXiv Detail & Related papers (2026-03-02T02:26:54Z) - Semi-supervised Instruction Tuning for Large Language Models on Text-Attributed Graphs [62.544129365882014]
We propose a novel Semi-supervised Instruction Tuning pipeline for Graph Learning, named SIT-Graph. SIT-Graph is model-agnostic and can be seamlessly integrated into any graph instruction tuning method that utilizes LLMs as the predictor. Extensive experiments demonstrate that when incorporated into state-of-the-art graph instruction tuning methods, SIT-Graph significantly enhances their performance on text-attributed graph benchmarks.
arXiv Detail & Related papers (2026-01-19T08:10:53Z) - Refining Interactions: Enhancing Anisotropy in Graph Neural Networks with Language Semantics [6.273224130511677]
We introduce LanSAGNN (Language Semantic Anisotropic Graph Neural Network), a framework that extends the concept of anisotropic GNNs to the natural language level. We propose an efficient dual-layer LLM fine-tuning architecture to better align LLMs' outputs with graph tasks.
arXiv Detail & Related papers (2025-04-02T07:32:45Z) - Few-Shot Graph Out-of-Distribution Detection with LLMs [34.42512005781724]
We propose a framework that combines the strengths of large language models (LLMs) and graph neural networks (GNNs) to enhance data efficiency in graph out-of-distribution (OOD) detection. We show that LLM-GOOD significantly reduces human annotation costs and outperforms state-of-the-art baselines in terms of both ID classification accuracy and OOD detection performance.
arXiv Detail & Related papers (2025-03-28T02:37:18Z) - LLM as GNN: Graph Vocabulary Learning for Text-Attributed Graph Foundation Models [54.82915844507371]
Text-Attributed Graphs (TAGs) are ubiquitous in real-world scenarios. Despite extensive efforts to integrate Large Language Models (LLMs) and Graph Neural Networks (GNNs) for TAGs, existing approaches suffer from decoupled architectures. We propose PromptGFM, a versatile GFM for TAGs grounded in graph vocabulary learning.
arXiv Detail & Related papers (2025-03-05T09:45:22Z) - Large Language Model-based Augmentation for Imbalanced Node Classification on Text-Attributed Graphs [13.42259312243504]
Node classification on graphs often suffers from class imbalance, leading to biased predictions and significant risks in real-world applications. We propose Large Language Model-based Augmentation on Text-Attributed Graphs (LA-TAG) to handle imbalanced node classification.
arXiv Detail & Related papers (2024-10-22T10:36:15Z) - Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
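For context, a hedged sketch of the general idea (assumed interfaces and dimensions, not the paper's implementation): a light GNN-style aggregation summarizes a node's neighborhood into a prompt vector that is projected into the LLM's embedding space and prepended to the node's text tokens.

```python
# Illustrative graph-prompt encoder; the single mean-aggregation "GNN" layer
# and all dimensions are simplifying assumptions.
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    def __init__(self, node_dim, llm_dim, num_prompt_tokens=1):
        super().__init__()
        self.msg = nn.Linear(node_dim, node_dim)   # stand-in for a GNN layer
        self.proj = nn.Linear(node_dim, llm_dim * num_prompt_tokens)
        self.num_prompt_tokens, self.llm_dim = num_prompt_tokens, llm_dim

    def forward(self, x, center_idx, neighbor_idx):
        # Aggregate neighbor features, combine with the center node, and project
        # the result into the LLM embedding space as `num_prompt_tokens` soft tokens.
        h = torch.relu(self.msg(x[neighbor_idx])).mean(dim=0) + x[center_idx]
        return self.proj(h).view(self.num_prompt_tokens, self.llm_dim)

# Usage: prepend the returned [num_prompt_tokens, llm_dim] prompt to the LLM's
# token embeddings for the center node's text before running the (frozen) LLM.
```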
arXiv Detail & Related papers (2024-04-28T18:36:59Z) - Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
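A minimal sketch of such an iterative loop (the prompt format, the `llm` callable, and the `graph.text` / `graph.neighbors` interface are all assumptions, not the paper's API):

```python
# Toy "reason on the graph" loop: at each step the LLM either answers or names
# a neighbor to visit next; reply parsing is deliberately simplistic.
def graph_cot(question, graph, start_node, llm, max_steps=5):
    current = start_node
    context = [f"Question: {question}", f"Start node: {graph.text(current)}"]
    for _ in range(max_steps):
        neighbors = graph.neighbors(current)
        if not neighbors:
            break
        prompt = "\n".join(context) + (
            f"\nNeighbors: {[graph.text(n)[:60] for n in neighbors]}"
            "\nReply 'ANSWER: <answer>' or 'VISIT: <neighbor index>'."
        )
        reply = llm(prompt)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        current = neighbors[int(reply.split(":", 1)[1])]  # follow the chosen edge
        context.append(f"Visited: {graph.text(current)}")
    return llm("\n".join(context) + "\nGive your best final answer.")
```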
arXiv Detail & Related papers (2024-04-10T15:41:53Z) - Distilling Large Language Models for Text-Attributed Graph Learning [16.447635770220334]
Text-Attributed Graphs (TAGs) are graphs of connected textual documents.
Graph models can efficiently learn TAGs, but their training heavily relies on human-annotated labels.
Large language models (LLMs) have recently demonstrated remarkable capabilities in few-shot and zero-shot TAG learning.
arXiv Detail & Related papers (2024-02-19T10:31:53Z) - Large Language Models as Topological Structure Enhancers for Text-Attributed Graphs [3.5627549694751184]
Large language models (LLMs) have revolutionized the field of natural language processing (NLP). This work explores how to leverage the information retrieval and text generation capabilities of LLMs to refine/enhance the topological structure of text-attributed graphs (TAGs) under the node classification setting.
arXiv Detail & Related papers (2023-11-24T07:53:48Z) - Leveraging Large Language Models for Node Generation in Few-Shot Learning on Text-Attributed Graphs [5.587264586806575]
We propose a plug-and-play approach to empower text-attributed graphs through node generation using Large Language Models (LLMs). LLMs extract semantic information from the labels and generate samples belonging to those categories as exemplars. We employ an edge predictor to capture the structural information inherent in the raw dataset and integrate the newly generated samples into the original graph.
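As a loose illustration of that recipe (all names are placeholders, and the cosine-similarity threshold stands in for the paper's learned edge predictor):

```python
# Generate class exemplars with an LLM, then attach them to the graph by
# linking each generated node to sufficiently similar existing nodes.
import torch
import torch.nn.functional as F

def generate_and_attach(llm, encode, label_names, node_emb, k_per_class=3, tau=0.8):
    new_texts, new_labels = [], []
    for c, name in enumerate(label_names):
        for _ in range(k_per_class):
            new_texts.append(llm(f"Write a short document about the topic '{name}'."))
            new_labels.append(c)
    new_emb = encode(new_texts)                      # [M, d], same encoder as node_emb
    # Cosine-threshold edge "predictor" (a simplification of a learned predictor).
    sim = F.cosine_similarity(new_emb.unsqueeze(1), node_emb.unsqueeze(0), dim=-1)
    new_edges = (sim > tau).nonzero()                # pairs (generated idx, existing idx)
    return new_texts, new_labels, new_edges
```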
arXiv Detail & Related papers (2023-10-15T16:04:28Z) - Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [59.74814230246034]
Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities.
We investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors.
arXiv Detail & Related papers (2023-07-07T05:31:31Z) - Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is the use of LLM-generated explanations as features to boost GNN performance on downstream tasks.
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
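The explanation-as-features recipe in the entry above could be sketched roughly as follows (the `llm` and `lm_encoder` callables are assumptions, not the paper's interface):

```python
# Enrich node features with an LM encoding of LLM-generated explanations.
import torch

def explanation_features(llm, lm_encoder, node_texts, node_feats):
    explanations = [
        llm(f"Explain which topic this document is about and why:\n{text}")
        for text in node_texts
    ]
    expl_emb = lm_encoder(explanations)               # [N, d_expl]
    return torch.cat([node_feats, expl_emb], dim=-1)  # input features for the GNN
```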