GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning
- URL: http://arxiv.org/abs/2511.05592v1
- Date: Wed, 05 Nov 2025 13:07:26 GMT
- Title: GRAVER: Generative Graph Vocabularies for Robust Graph Foundation Models Fine-tuning
- Authors: Haonan Yuan, Qingyun Sun, Junhua Shi, Xingcheng Fu, Bryan Hooi, Jianxin Li, Philip S. Yu
- Abstract summary: Graph Foundation Models (GFMs) hold promise for broad applicability across diverse graph tasks and domains. Existing GFMs struggle with unstable few-shot fine-tuning. We propose GRAVER, a novel Generative gRAph VocabulariEs framework for Robust GFM fine-tuning.
- Score: 92.19531718298744
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Inspired by the remarkable success of foundation models in language and vision, Graph Foundation Models (GFMs) hold significant promise for broad applicability across diverse graph tasks and domains. However, existing GFMs struggle with unstable few-shot fine-tuning, where both performance and adaptation efficiency fluctuate significantly due to randomness in support-sample selection and structural discrepancies between the pre-trained and target graphs. The major challenge is how to fine-tune GFMs robustly and efficiently so as to enable trustworthy knowledge transfer across domains and tasks. In this paper, we propose GRAVER, a novel Generative gRAph VocabulariEs framework for Robust GFM fine-tuning that tackles the aforementioned instability via generative augmentations. Specifically, to identify transferable units, we analyze and extract key class-specific subgraph patterns via ego-graph disentanglement and validate their transferability both theoretically and empirically. To enable effective pre-training across diverse domains, we leverage a universal task template based on ego-graph similarity and construct graph vocabularies via graphon-based generative experts. To facilitate robust and efficient prompt fine-tuning, we grave the support samples with in-context vocabularies, where a lightweight MoE-CoE network attentively routes knowledge from source domains. Extensive experiments on downstream few-shot node and graph classification tasks demonstrate the superiority of GRAVER in effectiveness, robustness, and efficiency over 15 state-of-the-art baselines.
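The universal task template above classifies a query by the similarity of its ego-graph to those of the (augmented) support samples. Below is a minimal, hedged sketch of that template, assuming hand-crafted structural statistics in place of the learned encoders, graphon-based generative experts, and MoE-CoE routing that GRAVER actually uses; all function names are illustrative, not the authors' implementation.

```python
# Toy ego-graph-similarity task template (illustrative sketch, not GRAVER itself).
import networkx as nx
import numpy as np

def ego_graph_embedding(G, node, radius=2):
    """Summarize a node's k-hop ego-graph with simple structural statistics."""
    eg = nx.ego_graph(G, node, radius=radius)
    degs = [d for _, d in eg.degree()]
    return np.array([eg.number_of_nodes(),
                     eg.number_of_edges(),
                     float(np.mean(degs)),
                     nx.density(eg)])

def classify_by_prototype(G, query, support):
    """support: {label: [nodes]}. Assign query to the nearest class prototype."""
    q = ego_graph_embedding(G, query)
    protos = {c: np.mean([ego_graph_embedding(G, n) for n in nodes], axis=0)
              for c, nodes in support.items()}
    return min(protos, key=lambda c: np.linalg.norm(q - protos[c]))

# Usage on a toy graph:
G = nx.karate_club_graph()
print(classify_by_prototype(G, query=0, support={"a": [1, 2], "b": [32, 33]}))
```

In GRAVER itself, the support set would additionally be augmented ("graved") with generated in-context vocabularies before the class prototypes are formed, which is what stabilizes few-shot fine-tuning against support-sample randomness.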
Related papers
- RAG-GFM: Overcoming In-Memory Bottlenecks in Graph Foundation Models via Retrieval-Augmented Generation [27.59455285600957]
Graph Foundation Models (GFMs) have emerged as a frontier in graph learning and are expected to deliver transferable representations across diverse tasks. We propose RAG-GFM, a Retrieval-Augmented Generation aided Graph Foundation Model that offloads knowledge from parameters. We show that RAG-GFM consistently outperforms 13 state-of-the-art baselines in both cross-domain node and graph classification. A minimal sketch of the retrieval idea appears after this entry.
arXiv Detail & Related papers (2026-01-21T16:02:43Z)
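The summary gives no architectural details, so the following is only a loose, hypothetical illustration of offloading knowledge from parameters into a retrieval memory: embeddings and labels are stored non-parametrically and the nearest entries are fetched at inference. The class name and interface are assumptions.

```python
# Hypothetical non-parametric retrieval memory (not RAG-GFM's actual design).
import numpy as np

class RetrievalMemory:
    def __init__(self):
        self.keys, self.labels = [], []

    def add(self, embedding, label):
        """Store knowledge outside the model's weights."""
        self.keys.append(np.asarray(embedding, dtype=float))
        self.labels.append(label)

    def retrieve(self, query, k=5):
        """Return labels of the k most cosine-similar stored embeddings."""
        K = np.stack(self.keys)
        q = np.asarray(query, dtype=float)
        sims = K @ q / (np.linalg.norm(K, axis=1) * np.linalg.norm(q) + 1e-9)
        return [self.labels[i] for i in np.argsort(-sims)[:k]]
```

A downstream classifier could then combine the retrieved labels with its own parametric prediction, which is the usual way retrieval augmentation trades external memory for model capacity.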
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs. We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs). This framework provides a standardized setting to evaluate GNNs across diverse datasets. We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We propose a novel approach to cross-task generalization in graphs via task-trees. We show that pretraining a graph neural network (GNN) on diverse task-trees with a reconstruction objective induces transferable knowledge, enabling efficient adaptation to downstream tasks with minimal fine-tuning. A toy sketch of such a reconstruction objective follows this entry.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
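A toy sketch, under stated assumptions, of reconstruction-based pretraining: node features on a sampled tree are masked and a small encoder is trained to reconstruct them. The two-layer mean-aggregation encoder is illustrative only and is not the paper's GNN.

```python
# Masked-feature reconstruction on a tree sample (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TreeEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x, adj):
        # adj is assumed row-normalized with self-loops, shape (N, N).
        h = torch.relu(self.lin1(adj @ x))
        return self.lin2(adj @ h)

def reconstruction_step(encoder, x, adj, optimizer, mask_rate=0.3):
    """One pretraining step: mask features, reconstruct, take a gradient step."""
    mask = torch.rand(x.size(0)) < mask_rate
    if not mask.any():
        return 0.0
    x_in = x.clone()
    x_in[mask] = 0.0                      # hide the masked features
    loss = ((encoder(x_in, adj)[mask] - x[mask]) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step over many task-trees drawn from different graphs is, per the summary, what induces the transferable knowledge.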
- GFT: Graph Foundation Model with Transferable Tree Vocabulary [52.17804507458509]
We propose a cross-task, cross-domain graph foundation model named GFT, short for Graph Foundation model with transferable Tree vocabulary.
By treating computation trees as tokens within the transferable vocabulary, GFT improves model generalization and reduces the risk of negative transfer.
Theoretical analyses and extensive experiments demonstrate the transferability of computation trees and the effectiveness of GFT across diverse tasks and domains in graph learning. A toy tokenization sketch follows this entry.
arXiv Detail & Related papers (2024-11-09T05:14:30Z)
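To make "computation trees as tokens" concrete, the hedged sketch below unfolds a node's neighborhood into a tree and maps it to a discrete id with a Weisfeiler-Lehman hash, so that structurally identical trees share a vocabulary entry. This is an assumption-laden illustration, not GFT's implementation.

```python
# Computation-tree extraction and hashing into discrete tokens (illustrative).
import networkx as nx

def computation_tree(G, root, depth=2):
    """Unfold the `depth`-hop neighborhood of `root` into a computation tree.
    Note: the tree grows quickly with depth, so keep depth small."""
    tree = nx.Graph()
    tree.add_node(0, origin=root)
    frontier, next_id = [(0, root)], 1
    for _ in range(depth):
        new_frontier = []
        for tid, v in frontier:
            for u in G.neighbors(v):
                tree.add_node(next_id, origin=u)
                tree.add_edge(tid, next_id)
                new_frontier.append((next_id, u))
                next_id += 1
        frontier = new_frontier
    return tree

def tree_token(tree):
    """Map a tree to a discrete token via the Weisfeiler-Lehman graph hash."""
    return nx.weisfeiler_lehman_graph_hash(tree)

G = nx.karate_club_graph()
print(tree_token(computation_tree(G, root=0, depth=2)))
```

Because the hash is purely structural, the same token can recur across graphs from different domains, which is the property a transferable vocabulary needs.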
- Against Multifaceted Graph Heterogeneity via Asymmetric Federated Prompt Learning [5.813912301780917]
We propose a Federated Graph Prompt Learning (FedGPL) framework to efficiently enable prompt-based asymmetric graph knowledge transfer.
We conduct theoretical analyses and extensive experiments to demonstrate the significant accuracy and efficiency of FedGPL.
arXiv Detail & Related papers (2024-11-04T11:42:25Z)
- LangGFM: A Large Language Model Alone Can be a Powerful Graph Foundation Model [27.047809869136458]
Graph foundation models (GFMs) have recently gained significant attention.
Current research tends to focus on specific subsets of graph learning tasks.
We propose GFMBench, a systematic and comprehensive benchmark comprising 26 datasets.
We also introduce LangGFM, a novel GFM that relies entirely on large language models.
arXiv Detail & Related papers (2024-10-19T03:27:19Z)
- Towards Graph Foundation Models: Training on Knowledge Graphs Enables Transferability to General Graphs [26.477872205199667]
We introduce SCR, a unified graph reasoning framework designed to train on knowledge graphs. We propose semantic-conditioned message passing, a novel mechanism addressing the inherent semantic isolation in traditional KG reasoning. Our results show substantial performance gains over existing foundation models. A minimal sketch of one plausible reading of this mechanism follows this entry.
arXiv Detail & Related papers (2024-10-16T14:26:08Z)
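The summary does not spell out the mechanism, so the sketch below shows only one plausible reading of semantic-conditioned message passing: each edge's message is conditioned on a learned embedding of its relation type, letting semantically related relations share signal. The module name and interface are invented for illustration.

```python
# One possible semantic-conditioned message-passing layer (assumed, not SCR's code).
import torch
import torch.nn as nn

class SemanticConditionedMP(nn.Module):
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.msg = nn.Linear(2 * dim, dim)

    def forward(self, h, edges, rel_types):
        # h: (N, dim) node states; edges: (E, 2) long tensor of (src, dst);
        # rel_types: (E,) long tensor of relation ids per edge.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(torch.cat([h[src], self.rel_emb(rel_types)], dim=-1))
        out = torch.zeros_like(h)
        out.index_add_(0, dst, torch.relu(m))  # sum-aggregate messages per target
        return out
```

In SCR the conditioning signal presumably carries the textual semantics of each relation; here a plain learned embedding table stands in for it.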
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets. A toy walk-sampling sketch follows this entry.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
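The summary says GSPT builds node contexts from random walks; the sketch below shows one plausible such sampler (function names are illustrative assumptions), producing a token sequence per node that a standard Transformer could consume.

```python
# Random-walk context sampling for sequence-style graph pretraining (illustrative).
import random
import networkx as nx

def sample_walk(G, start, length=8):
    """One unbiased random walk of at most `length` nodes from `start`."""
    walk, cur = [start], start
    for _ in range(length - 1):
        nbrs = list(G.neighbors(cur))
        if not nbrs:
            break
        cur = random.choice(nbrs)
        walk.append(cur)
    return walk

def node_context(G, node, num_walks=4, length=8):
    """Concatenate several walks into a pseudo-sentence of node ids."""
    return [v for _ in range(num_walks) for v in sample_walk(G, node, length)]

G = nx.karate_club_graph()
print(node_context(G, node=0))
```

On text-attributed graphs, each node id in the sequence would then be replaced by that node's text features before being fed to the Transformer.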
- Position: Graph Foundation Models are Already Here [53.737868336014735]
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain.
We propose a novel perspective on GFM development by advocating for a "graph vocabulary".
This perspective can potentially advance future GFM design in line with neural scaling laws.
arXiv Detail & Related papers (2024-02-03T17:24:36Z)