FRGNN: Mitigating the Impact of Distribution Shift on Graph Neural
Networks via Test-Time Feature Reconstruction
- URL: http://arxiv.org/abs/2308.09259v2
- Date: Fri, 13 Oct 2023 09:41:45 GMT
- Title: FRGNN: Mitigating the Impact of Distribution Shift on Graph Neural
Networks via Test-Time Feature Reconstruction
- Authors: Rui Ding, Jielong Yang, Feng Ji, Xionghu Zhong, Linbo Xie
- Abstract summary: A distribution shift can adversely affect the test performance of Graph Neural Networks (GNNs).
We propose FRGNN, a general framework for GNNs to conduct feature reconstruction.
Notably, the reconstructed node features can be directly utilized for testing the well-trained model.
- Score: 13.21683198528012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to inappropriate sample selection and limited training data, a
distribution shift often exists between the training and test sets. This shift
can adversely affect the test performance of Graph Neural Networks (GNNs).
Existing approaches mitigate this issue by either enhancing the robustness of
GNNs to distribution shift or reducing the shift itself. However, both
approaches necessitate retraining the model, which becomes unfeasible when the
model structure and parameters are inaccessible. To address this challenge, we
propose FRGNN, a general framework for GNNs to conduct feature reconstruction.
FRGNN constructs a mapping relationship between the output and input of a
well-trained GNN to obtain class representative embeddings and then uses these
embeddings to reconstruct the features of labeled nodes. These reconstructed
features are then incorporated into the message passing mechanism of GNNs to
influence the predictions of unlabeled nodes at test time. Notably, the
reconstructed node features can be directly utilized for testing the
well-trained model, effectively reducing the distribution shift and leading to
improved test performance. This remarkable achievement is attained without any
modifications to the model structure or parameters. We provide theoretical
guarantees for the effectiveness of our framework. Furthermore, we conduct
comprehensive experiments on various public datasets. The experimental results
demonstrate the superior performance of FRGNN in comparison to multiple
categories of baseline methods.
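The abstract's pipeline can be sketched in a few lines: compute per-class mean embeddings from a frozen model's output, map them back to input space, overwrite the labeled nodes' features, and run the unchanged model again so message passing propagates the reconstructed features to unlabeled nodes. The sketch below is a minimal illustration, not the paper's implementation: the frozen "GNN" is a single linear propagation step, and the learned output-to-input mapping is replaced by a pseudo-inverse of the weight matrix, both stand-ins chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes, symmetric adjacency; the frozen "model" below is a
# single propagation + linear transform, a stand-in for a trained GNN.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(6)
A_hat /= A_hat.sum(axis=1, keepdims=True)    # row-normalized propagation

X = rng.normal(size=(6, 4))                  # node features
W = rng.normal(size=(4, 3))                  # frozen weights, never retrained
labels = np.array([0, 0, 0, 1, 1, 1])
labeled = np.array([0, 1, 3, 4])             # indices of labeled nodes

def gnn(feats):
    """Frozen model: one message-passing step followed by a linear map."""
    return A_hat @ feats @ W

# 1) Class-representative embeddings: per-class mean over labeled nodes.
H = gnn(X)
reps = np.stack([H[labeled][labels[labeled] == c].mean(axis=0)
                 for c in (0, 1)])

# 2) Map embeddings back to input space. FRGNN learns this output-to-input
#    mapping; the pseudo-inverse of W is a crude stand-in for that mapping.
X_rec = X.copy()
X_rec[labeled] = reps[labels[labeled]] @ np.linalg.pinv(W)

# 3) Test-time inference with reconstructed features: message passing spreads
#    their influence to unlabeled nodes while model weights stay untouched.
H_new = gnn(X_rec)
print(H_new.shape)
```

Only the input features change between the two forward passes; the model structure and parameters are identical, which is the property the abstract highlights.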
Related papers
- Control the GNN: Utilizing Neural Controller with Lyapunov Stability for Test-Time Feature Reconstruction [15.066912209426542]
The performance of graph neural networks (GNNs) is susceptible to discrepancies between training and testing sample distributions.
We propose a novel node feature reconstruction method grounded in Lyapunov stability theory.
We validate the effectiveness of our approach through extensive experiments across multiple datasets, demonstrating significant performance improvements.
arXiv Detail & Related papers (2024-10-13T03:34:19Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
BP has limitations that challenge its biological plausibility and affect the efficiency, scalability and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into a well-trained GNN's ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z) - Robust Node Representation Learning via Graph Variational Diffusion
Networks [7.335425547621226]
In recent years, compelling evidence has revealed that GNN-based node representation learning can be substantially deteriorated by perturbations in a graph structure.
To learn robust node representation in the presence of perturbations, various works have been proposed to safeguard GNNs.
We propose the Graph Variational Diffusion Network (GVDN), a new node encoder that effectively manipulates Gaussian noise to safeguard robustness on perturbed graphs.
arXiv Detail & Related papers (2023-12-18T03:18:53Z) - GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z) - GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed wide spread applications in graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.