Out-of-Distribution Generalization in Graph Foundation Models
- URL: http://arxiv.org/abs/2601.21067v1
- Date: Wed, 28 Jan 2026 21:51:59 GMT
- Title: Out-of-Distribution Generalization in Graph Foundation Models
- Authors: Haoyang Li, Haibo Chen, Xin Wang, Wenwu Zhu
- Abstract summary: Graph foundation models (GFMs) aim to learn general-purpose representations through large-scale pretraining across diverse graphs and tasks. We first discuss the main challenges posed by distribution shifts in graph learning and outline a unified problem setting. We then organize existing approaches based on whether they are designed to operate under a fixed task specification or to support generalization across heterogeneous task formulations.
- Score: 33.83630410555168
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graphs are a fundamental data structure for representing relational information in domains such as social networks, molecular systems, and knowledge graphs. However, graph learning models often suffer from limited generalization when applied beyond their training distributions. In practice, distribution shifts may arise from changes in graph structure, domain semantics, available modalities, or task formulations. To address these challenges, graph foundation models (GFMs) have recently emerged, aiming to learn general-purpose representations through large-scale pretraining across diverse graphs and tasks. In this survey, we review recent progress on GFMs from the perspective of out-of-distribution (OOD) generalization. We first discuss the main challenges posed by distribution shifts in graph learning and outline a unified problem setting. We then organize existing approaches based on whether they are designed to operate under a fixed task specification or to support generalization across heterogeneous task formulations, and summarize the corresponding OOD handling strategies and pretraining objectives. Finally, we review common evaluation protocols and discuss open directions for future research. To the best of our knowledge, this paper is the first survey for OOD generalization in GFMs.
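The abstract notes that distribution shifts may arise from changes in graph structure. As a minimal, illustrative sketch (not from the paper itself), one common way to quantify such a structural shift is to compare the empirical degree distributions of the training and test graphs, for example via total variation distance; the edge lists below are hypothetical toy graphs chosen only for illustration:

```python
from collections import Counter

def degree_distribution(edges, num_nodes):
    """Empirical degree distribution of an undirected graph given as an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg[n] for n in range(num_nodes))
    return {d: c / num_nodes for d, c in counts.items()}

def tv_distance(p, q):
    """Total variation distance between two discrete distributions over degrees."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(d, 0.0) - q.get(d, 0.0)) for d in support)

# Hypothetical example: a triangle with a pendant node vs. a path graph, both on 4 nodes.
train_edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # degrees: 2, 2, 3, 1
test_edges = [(0, 1), (1, 2), (2, 3)]           # degrees: 1, 2, 2, 1

p = degree_distribution(train_edges, 4)
q = degree_distribution(test_edges, 4)
print(tv_distance(p, q))  # 0.25 -- a nonzero value signals a structural shift
```

Degree distributions capture only one facet of structural shift; the survey's taxonomy also covers shifts in domain semantics, modalities, and task formulations, which require different handling strategies.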
Related papers
- Graph Foundation Models: A Comprehensive Survey [66.74249119139661]
Graph Foundation Models (GFMs) aim to bring scalable, general-purpose intelligence to structured data. This survey provides a comprehensive overview of GFMs, unifying diverse efforts under a modular framework. GFMs are poised to become foundational infrastructure for open-ended reasoning over structured data.
arXiv Detail & Related papers (2025-05-21T05:08:00Z)
- Out-of-Distribution Detection on Graphs: A Survey [58.47395497985277]
Graph out-of-distribution (GOOD) detection focuses on identifying graph data that deviates from the distribution seen during training. We categorize existing methods into four types: enhancement-based, reconstruction-based, information propagation-based, and classification-based approaches. We discuss practical applications and theoretical foundations, highlighting the unique challenges posed by graph data.
arXiv Detail & Related papers (2025-02-12T04:07:12Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We propose a novel approach to cross-task generalization in graphs via task-trees. We show that pretraining a graph neural network (GNN) on diverse task-trees with a reconstruction objective induces transferable knowledge. This enables efficient adaptation to downstream tasks with minimal fine-tuning.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts. Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation. To provide a better understanding of the literature, we introduce a systematic taxonomy that classifies existing methods into model-centric and data-centric approaches.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- LangGFM: A Large Language Model Alone Can be a Powerful Graph Foundation Model [27.047809869136458]
Graph foundation models (GFMs) have recently gained significant attention.
Current research tends to focus on specific subsets of graph learning tasks.
We propose GFMBench, a systematic and comprehensive benchmark comprising 26 datasets.
We also introduce LangGFM, a novel GFM that relies entirely on large language models.
arXiv Detail & Related papers (2024-10-19T03:27:19Z)
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- A Survey on Self-Supervised Graph Foundation Models: Knowledge-Based Perspective [14.403179370556332]
Graph self-supervised learning (SSL) is now a go-to method for pre-training graph foundation models (GFMs). We propose a knowledge-based taxonomy, which categorizes self-supervised graph models by the specific graph knowledge utilized.
arXiv Detail & Related papers (2024-03-24T13:10:09Z)
- Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z)
- Position: Graph Foundation Models are Already Here [53.737868336014735]
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain.
We propose a novel perspective for GFM development by advocating for a "graph vocabulary".
This perspective can potentially advance the future GFM design in line with the neural scaling laws.
arXiv Detail & Related papers (2024-02-03T17:24:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.