Adaptive Tokenization: On the Hop-Overpriority Problem in Tokenized Graph Learning Models
- URL: http://arxiv.org/abs/2505.15845v1
- Date: Mon, 19 May 2025 06:25:33 GMT
- Title: Adaptive Tokenization: On the Hop-Overpriority Problem in Tokenized Graph Learning Models
- Authors: Zhibiao Wang, Yunlong Zhou, Ziwei Zhang, Mengmei Zhang, Shirui Pan, Chunming Hu, Xiao Wang
- Abstract summary: Tokenized Graph Learning Models (TGLMs) convert graphs into ordered token lists for scalable processing. TGLMs rely on hand-designed token lists, and their adaptability to diverse graph learning scenarios remains unexplored. We propose the Learnable Graph Token List (LGTL), a plug-and-play module to replace hand-designed token lists in TGLMs.
- Score: 59.98982735197465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Transformers, which leverage global attention to capture long-range dependencies in graph structures, have significantly advanced graph machine learning, but they face prohibitive computational complexity. Tokenized Graph Learning Models (TGLMs) address this issue by converting graphs into ordered token lists for scalable processing. TGLMs also empower Large Language Models (LLMs) to handle text-attributed graphs more effectively and are therefore employed in Graph LLMs as well. However, existing TGLMs rely on hand-designed token lists, and their adaptability to diverse graph learning scenarios remains unexplored. In this paper, we first conduct extensive empirical and theoretical preliminary studies of hand-designed token lists. Surprisingly, we identify a previously unexplored hop-overpriority problem: common pre-defined token lists overemphasize nearby nodes and overwhelm the ability of TGLMs to balance local and global signals, a phenomenon that is especially harmful for heterophilic graphs. To address this problem, we propose the Learnable Graph Token List (LGTL), a plug-and-play module that replaces hand-designed token lists in TGLMs. Specifically, LGTL adaptively adjusts the weights across hops and prioritizes informative nodes within hops through a graph attention gate module and a selection module, respectively. In this way, contextually informative nodes can be adaptively emphasized for both homophilic and heterophilic graphs. We further show theoretically that LGTL addresses the hop-overpriority problem. Extensive experiments on benchmarks validate the efficacy of LGTL across both Graph Transformer and Graph LLM backbones.
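The abstract describes LGTL only at a high level: a graph attention gate that re-weights hops and a selection module that keeps informative nodes within each hop. The PyTorch snippet below is a minimal sketch of that idea, assuming a softmax gate over hop representations and a learned top-k scorer per hop; the module names, dimensions, and exact weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a learnable graph token list in the spirit of LGTL.
# Module names, the softmax hop gate, and the top-k selection rule are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableTokenList(nn.Module):
    """Builds an ordered token list for one center node from hop-wise neighbours."""

    def __init__(self, dim: int, num_hops: int, tokens_per_hop: int):
        super().__init__()
        self.num_hops = num_hops
        self.tokens_per_hop = tokens_per_hop
        # Gate module: attention weights across hops, conditioned on the center node.
        self.hop_query = nn.Linear(dim, dim)
        self.hop_key = nn.Linear(dim, dim)
        # Selection module: scores nodes within a hop so informative ones are kept.
        self.select_score = nn.Linear(dim, 1)

    def forward(self, center: torch.Tensor, hop_feats: list) -> torch.Tensor:
        # center: (dim,); hop_feats[h]: (n_h, dim) features of hop-(h+1) neighbours.
        assert len(hop_feats) == self.num_hops
        q = self.hop_query(center)                                        # (dim,)
        keys = self.hop_key(torch.stack([f.mean(0) for f in hop_feats]))  # (H, dim)
        # Attention gate across hops, instead of a fixed hop-based priority.
        hop_w = F.softmax(keys @ q / q.shape[0] ** 0.5, dim=0)            # (H,)
        tokens = [center.unsqueeze(0)]                                    # hop 0: the node itself
        for h, feats in enumerate(hop_feats):
            scores = self.select_score(feats).squeeze(-1)                 # (n_h,)
            k = min(self.tokens_per_hop, feats.shape[0])
            top = torch.topk(scores, k).indices                           # within-hop selection
            tokens.append(hop_w[h] * feats[top])                          # hop-weighted tokens
        return torch.cat(tokens, dim=0)  # ordered token list for the TGLM backbone


if __name__ == "__main__":
    torch.manual_seed(0)
    lgtl = LearnableTokenList(dim=16, num_hops=2, tokens_per_hop=3)
    center = torch.randn(16)
    hops = [torch.randn(5, 16), torch.randn(8, 16)]  # 1-hop and 2-hop neighbour features
    print(lgtl(center, hops).shape)                  # torch.Size([7, 16])
```

In a real TGLM, the resulting token list would replace the fixed hop-ordered list fed to the Graph Transformer or Graph LLM backbone.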
Related papers
- G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning [58.73279333365234]
Reinforcement Learning (RL) on synthetic graph-theoretic tasks can significantly scale graph reasoning abilities. With RL on Erdos, G1 obtains substantial improvements in graph reasoning, where our finetuned 3B model even outperforms Qwen2.5-72B-Instruct (24x its size). Our findings offer an efficient, scalable path for building strong graph reasoners by finetuning LLMs with RL on graph-theoretic tasks.
arXiv Detail & Related papers (2025-05-24T04:33:41Z)
- Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation [75.9865035064794]
Large language models (LLMs) have demonstrated remarkable capabilities, but still struggle with issues like hallucinations and outdated information. Retrieval-augmented generation (RAG) addresses these issues by grounding LLM outputs in external knowledge with an Information Retrieval (IR) system. We propose Align-GRAG, a novel reasoning-guided dual alignment framework for the post-retrieval phase.
arXiv Detail & Related papers (2025-05-22T05:15:27Z)
- Scalability Matters: Overcoming Challenges in InstructGLM with Similarity-Degree-Based Sampling [1.2805157669888096]
We propose SDM-InstructGLM, a novel instruction-tuned Graph Language Model (InstructGLM) framework that enhances scalability and efficiency without relying on GNNs. Our method introduces a similarity-degree-based biased random walk mechanism, which selectively samples and encodes graph information based on node-feature similarity and degree centrality (a rough sketch of such a walk appears after this list). Our results demonstrate the feasibility of LLM-only graph processing, enabling scalable and interpretable Graph Language Models (GLMs) optimized through instruction-based fine-tuning.
arXiv Detail & Related papers (2025-05-02T06:08:21Z)
- LLM as GNN: Graph Vocabulary Learning for Text-Attributed Graph Foundation Models [54.82915844507371]
Text-Attributed Graphs (TAGs) are ubiquitous in real-world scenarios. Despite large efforts to integrate Large Language Models (LLMs) and Graph Neural Networks (GNNs) for TAGs, existing approaches suffer from decoupled architectures. We propose PromptGFM, a versatile GFM for TAGs grounded in graph vocabulary learning.
arXiv Detail & Related papers (2025-03-05T09:45:22Z)
- Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language Models [3.9489815622117566]
Learnable Graph Pooling Token (LGPT) enables flexible and efficient graph representation. Our method achieves a 4.13% performance improvement on the GraphQA benchmark without training the large language model.
arXiv Detail & Related papers (2025-01-29T10:35:41Z)
- Enhance Graph Alignment for Large Language Models [33.96082485852042]
Graph-to-token approaches are a popular way to enable Large Language Models to process graph information.
Existing methods suffer from a misalignment between self-supervised tasks and supervised downstream tasks.
We propose Graph Alignment Large Language Models (GALLM) to benefit from aligned task templates.
arXiv Detail & Related papers (2024-10-15T07:50:34Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [88.4320775961431]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs. Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy. We propose LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- LLaGA: Large Language and Graph Assistant [73.71990472543027]
Large Language and Graph Assistant (LLaGA) is an innovative model to handle the complexities of graph-structured data.
LLaGA excels in versatility, generalizability and interpretability, allowing it to perform consistently well across different datasets and tasks.
Our experiments show that LLaGA delivers outstanding performance across four datasets and three tasks using one single model.
arXiv Detail & Related papers (2024-02-13T02:03:26Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this challenge by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
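The SDM-InstructGLM entry above mentions a similarity-degree-based biased random walk for sampling graph context; below is the sketch referenced there. It is a small, self-contained illustration that assumes transition weights blending cosine similarity of node features with neighbour degree; the blending rule and the `alpha` parameter are assumptions, not the paper's exact sampling procedure.

```python
# Hedged sketch of a similarity-degree-biased random walk in the spirit of the
# SDM-InstructGLM summary above. The blending of cosine similarity and degree
# (and the alpha parameter) is an assumed heuristic, not the paper's recipe.
import random
import networkx as nx
import numpy as np


def biased_random_walk(G, feats, start, length, alpha=0.5):
    """Walk `length` steps; each step favours neighbours that are feature-similar
    to the current node and/or have high degree centrality."""
    walk = [start]
    for _ in range(length):
        cur = walk[-1]
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        # Cosine similarity between the current node and each neighbour.
        sims = np.array([
            float(np.dot(feats[cur], feats[n])
                  / (np.linalg.norm(feats[cur]) * np.linalg.norm(feats[n]) + 1e-8))
            for n in nbrs
        ])
        degs = np.array([G.degree(n) for n in nbrs], dtype=float)
        # Blend (shifted) similarity with normalized degree, then normalize.
        weights = alpha * (sims - sims.min() + 1e-8) + (1 - alpha) * degs / degs.sum()
        probs = (weights / weights.sum()).tolist()
        walk.append(random.choices(nbrs, weights=probs)[0])
    return walk


if __name__ == "__main__":
    random.seed(0)
    G = nx.karate_club_graph()
    rng = np.random.default_rng(0)
    feats = {n: rng.normal(size=8) for n in G.nodes}  # toy node features
    print(biased_random_walk(G, feats, start=0, length=10))
```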