Stable Prediction on Graphs with Agnostic Distribution Shift
- URL: http://arxiv.org/abs/2110.03865v1
- Date: Fri, 8 Oct 2021 02:45:47 GMT
- Title: Stable Prediction on Graphs with Agnostic Distribution Shift
- Authors: Shengyu Zhang, Kun Kuang, Jiezhong Qiu, Jin Yu, Zhou Zhao, Hongxia
Yang, Zhongfei Zhang, Fei Wu
- Abstract summary: Graph neural networks (GNNs) have been shown to be effective on various graph tasks with randomly separated training and testing data.
In real applications, however, the distribution of the training graph may differ from that of the test graph.
We propose a novel stable prediction framework for GNNs, which permits both locally and globally stable learning and prediction on graphs.
- Score: 105.12836224149633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphs are a flexible and effective tool for representing complex
structures in practice, and graph neural networks (GNNs) have been shown to be
effective on various graph tasks with randomly separated training and testing
data. In real applications, however, the distribution of the training graph may
differ from that of the test graph (e.g., in recommender systems, users'
interactions on the user-item training graph are known to be inconsistent with
their actual preferences on items, i.e., the testing environment). Moreover,
the distribution of the test data is always agnostic when GNNs are trained.
Hence, we face an agnostic distribution shift between training and testing in
graph learning, which leads to unstable inference of traditional GNNs across
different test environments. To address this problem, we propose a novel stable
prediction framework for GNNs, which permits both locally and globally stable
learning and prediction on graphs. In particular, since each node is partially
represented by its neighbors in GNNs, we propose to capture the stable
properties of each node (local stability) by re-weighting the information
propagation/aggregation processes. For global stability, we propose a stable
regularizer that reduces the training losses on heterogeneous environments,
thereby steering the GNNs to generalize well. We conduct extensive experiments
on several graph benchmarks and on a noisy industrial recommendation dataset
collected over 5 consecutive days during a product promotion festival. The
results demonstrate that our method outperforms various state-of-the-art GNNs
for stable prediction on graphs with agnostic distribution shift, including
shifts caused by node labels and attributes.
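The abstract describes the two ideas only at a high level. As a minimal sketch of what they could look like, and not the authors' actual implementation, one can imagine (1) a neighbor aggregation where each edge carries a learnable sample weight, and (2) a regularizer that penalizes the variance of the training loss across environments. The function names, shapes, and the specific variance penalty below are illustrative assumptions:

```python
import numpy as np

def reweighted_aggregate(features, adj, weights):
    """Locally stable aggregation (sketch): each node takes a weighted
    mean of its neighbors' features. The per-edge weights stand in for
    the learned re-weighting that down-weights unstable neighbors."""
    w_adj = adj * weights                  # element-wise edge re-weighting
    deg = w_adj.sum(axis=1, keepdims=True)
    deg[deg == 0.0] = 1.0                  # guard isolated nodes against /0
    return (w_adj @ features) / deg        # weighted mean over neighbors

def stable_regularizer(env_losses):
    """Globally stable regularizer (sketch): penalize the variance of
    the loss across heterogeneous environments, so that no single
    environment dominates training."""
    losses = np.asarray(env_losses, dtype=float)
    return float(((losses - losses.mean()) ** 2).mean())

# Toy example: 3 nodes, 2-d features, uniform edge weights.
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
out = reweighted_aggregate(features, adj, np.ones((3, 3)))
```

In such a setup, the total objective would combine the mean loss over environments with `lambda * stable_regularizer(env_losses)`, where a larger `lambda` trades average performance for cross-environment stability.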
Related papers
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from identical distribution.
Existing GNNs lack out-of-distribution generalization abilities, so their performance degrades substantially when distribution shifts exist between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z) - Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.