Beyond Generalization: A Survey of Out-Of-Distribution Adaptation on
Graphs
- URL: http://arxiv.org/abs/2402.11153v1
- Date: Sat, 17 Feb 2024 00:40:12 GMT
- Title: Beyond Generalization: A Survey of Out-Of-Distribution Adaptation on
Graphs
- Authors: Shuhan Liu, Kaize Ding
- Abstract summary: We provide an up-to-date and forward-looking review of graph Out-Of-Distribution (OOD) adaptation methods.
Based on our proposed taxonomy for graph OOD adaptation, we systematically categorize the existing methods according to their learning paradigm.
We point out promising research directions and the corresponding challenges.
- Score: 22.561747395557642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Distribution shifts on graphs -- discrepancies between the data
distributions used to train and test a graph machine learning model -- are
ubiquitous and often unavoidable in real-world scenarios. Such shifts may severely deteriorate the
performance of the model, posing significant challenges for reliable graph
performance of the model, posing significant challenges for reliable graph
machine learning. Consequently, there has been a surge in research on graph
Out-Of-Distribution (OOD) adaptation methods that aim to mitigate the
distribution shifts and adapt the knowledge from one distribution to another.
In our survey, we provide an up-to-date and forward-looking review of graph OOD
adaptation methods, covering two main problem scenarios: training-time
and test-time graph OOD adaptation. We start by formally formulating the
two problems and then discuss different types of distribution shifts on graphs.
Based on our proposed taxonomy for graph OOD adaptation, we systematically
categorize the existing methods according to their learning paradigm and
investigate the techniques behind them. Finally, we point out promising
research directions and the corresponding challenges. We also provide a
continuously updated reading list at
https://github.com/kaize0409/Awesome-Graph-OOD-Adaptation.git
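For reference, the two problems formulated in the survey can be sketched with standard distribution-shift notation. The following is a generic illustration under common assumptions, not the paper's own notation: the symbols P_s, P_t, G, Y, f, and the loss \ell are introduced here for exposition only.

% A minimal sketch of graph OOD adaptation under standard distribution-shift
% notation; the symbols (P_s, P_t, G, Y, f, \ell) are illustrative assumptions,
% not definitions taken from this survey.
\begin{align*}
  &\text{Source (training) data: } (G, Y) \sim P_s(G, Y), \qquad
   \text{Target (testing) data: } (G, Y) \sim P_t(G, Y), \quad P_s \neq P_t \\
  &\text{Covariate shift: } P_s(G) \neq P_t(G) \text{ while } P_s(Y \mid G) = P_t(Y \mid G) \\
  &\text{Conditional (concept) shift: } P_s(Y \mid G) \neq P_t(Y \mid G) \\
  &\text{Adaptation objective: } \min_{f} \; \mathbb{E}_{(G, Y) \sim P_t}\big[\ell\big(f(G), Y\big)\big]
\end{align*}

Under this sketch, training-time adaptation optimizes the objective while source (and possibly some target) data are available during training, whereas test-time adaptation updates a source-trained model only when target data arrive at inference, typically without further access to the source data.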
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation [40.25858820407687]
Unsupervised Graph Domain Adaptation (UGDA) has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph.
We present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively.
Our proposed model outperforms recent source-free baselines by large margins.
arXiv Detail & Related papers (2024-03-03T10:23:08Z)
- Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and Future Directions [64.84521350148513]
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios.
Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data.
However, these methods often suffer from data imbalance, a common issue in graph data where some parts of the graph (e.g., certain classes or subgraphs) are abundantly represented while others are scarce.
This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes.
arXiv Detail & Related papers (2023-08-26T09:11:44Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Out-Of-Distribution Generalization on Graphs: A Survey [45.16337435648981]
Graph machine learning has been extensively studied in both academia and industry.
Most of the literature is built on the I.I.D. hypothesis, i.e., that training and testing graph data are independent and identically distributed.
This hypothesis rarely holds in real-world scenarios; to address the resulting distribution shifts, out-of-distribution (OOD) generalization on graphs has made great progress and attracted ever-increasing attention from the research community.
This paper is the first systematic and comprehensive review of OOD generalization on graphs, to the best of our knowledge.
arXiv Detail & Related papers (2022-02-16T10:59:06Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)