OOD-GNN: Out-of-Distribution Generalized Graph Neural Network
- URL: http://arxiv.org/abs/2112.03806v1
- Date: Tue, 7 Dec 2021 16:29:10 GMT
- Title: OOD-GNN: Out-of-Distribution Generalized Graph Neural Network
- Authors: Haoyang Li, Xin Wang, Ziwei Zhang, Wenwu Zhu
- Abstract summary: Graph neural networks (GNNs) achieve impressive performance when the testing and training graph data come from the same distribution.
Existing GNNs lack out-of-distribution generalization ability, so their performance degrades substantially when there are distribution shifts between the testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) that achieves satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
- Score: 73.67049248445277
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) have achieved impressive performance when
the testing and training graph data come from the same distribution. However,
existing GNNs lack out-of-distribution generalization ability, so their
performance degrades substantially when there are distribution shifts between
the testing and training graph data. To solve this problem, we propose an
out-of-distribution generalized graph neural network (OOD-GNN) that achieves
satisfactory performance on unseen testing graphs whose distributions differ
from those of the training graphs. OOD-GNN employs a novel nonlinear graph
representation decorrelation method based on random Fourier features, which
encourages the model to eliminate the statistical dependence between relevant
and irrelevant graph representations by iteratively optimizing the sample graph
weights and the graph encoder. We further design a global weight estimator that
learns weights for the training graphs such that the variables in the graph
representations are forced to be independent. The learned weights help the
graph encoder get rid of spurious correlations and, in turn, concentrate on the
true connection between the learned discriminative graph representations and
their ground-truth labels. We conduct extensive experiments to validate the
out-of-distribution generalization ability of OOD-GNN on two synthetic and 12
real-world datasets with distribution shifts. The results demonstrate that
OOD-GNN significantly outperforms state-of-the-art baselines.
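To make the mechanism described in the abstract concrete, here is a minimal PyTorch sketch of random-Fourier-feature (RFF) based representation decorrelation with learned sample weights. It is reconstructed from the abstract alone, not the authors' implementation: the function names (rff_map, decorrelation_loss, train_step), the RBF-style RFF map, the inner-loop schedule, and all hyperparameters are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def rff_map(x: torch.Tensor, num_features: int = 32) -> torch.Tensor:
    """Lift one representation dimension (n, 1) into random Fourier feature
    space (n, num_features). Fresh random projections are drawn on every
    call, so the dependence estimate below is stochastic."""
    w = torch.randn(x.shape[1], num_features, device=x.device)
    b = 2 * math.pi * torch.rand(num_features, device=x.device)
    return math.sqrt(2.0 / num_features) * torch.cos(x @ w + b)

def decorrelation_loss(z: torch.Tensor, weight_logits: torch.Tensor) -> torch.Tensor:
    """Sum of squared weighted cross-covariances between all pairs of
    representation dimensions after the RFF lift; it shrinks toward zero as
    the reweighted dimensions become approximately independent."""
    n, d = z.shape
    w = torch.softmax(weight_logits, dim=0)          # normalized sample weights
    feats = [rff_map(z[:, j:j + 1]) for j in range(d)]
    loss = z.new_zeros(())
    for i in range(d):
        for j in range(i + 1, d):
            fi = feats[i] - (w[:, None] * feats[i]).sum(0)   # weighted centering
            fj = feats[j] - (w[:, None] * feats[j]).sum(0)
            cov = (w[:, None, None] * fi[:, :, None] * fj[:, None, :]).sum(0)
            loss = loss + (cov ** 2).sum()
    return loss

def train_step(encoder, classifier, graphs, labels, weight_logits, opt_enc, opt_w):
    """One alternating step: first fit the sample weights to decorrelate the
    current (frozen) representations, then update the encoder and classifier
    on the reweighted task loss."""
    z = encoder(graphs).detach()                     # (n, d) graph representations
    for _ in range(5):                               # inner loop: weights only
        opt_w.zero_grad()
        decorrelation_loss(z, weight_logits).backward()
        opt_w.step()
    w = torch.softmax(weight_logits.detach(), dim=0)
    opt_enc.zero_grad()
    logits = classifier(encoder(graphs))
    task_loss = (w * F.cross_entropy(logits, labels, reduction="none")).sum()
    task_loss.backward()
    opt_enc.step()
    return task_loss.item()
```

The alternation mirrors the abstract's "iteratively optimizing the sample graph weights and graph encoder": the weights are tuned so that pairwise dependencies between representation dimensions vanish, and the encoder then trains on the reweighted task loss, which downweights samples that carry spurious correlations.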
Related papers
- Graph Fairness Learning under Distribution Shifts [33.9878682279549]
Graph neural networks (GNNs) have achieved remarkable performance on graph-structured data.
GNNs may inherit prejudice from the training data and make discriminatory predictions based on sensitive attributes, such as gender and race.
We propose a graph generator that produces numerous graphs with significant bias and at different distribution distances.
arXiv Detail & Related papers (2024-01-30T06:51:24Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between the testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to selection bias in the training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Addressing the Impact of Localized Training Data in Graph Neural Networks [0.0]
Graph Neural Networks (GNNs) have achieved notable success in learning from graph-structured data.
This article aims to assess the impact of training GNNs on localized subsets of the graph.
We propose a regularization method to minimize distributional discrepancies between localized training data and graph inference.
arXiv Detail & Related papers (2023-07-24T11:04:22Z)
- Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Existing Graph Neural Networks (GNNs) are proposed without considering distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)