Learning to Reweight for Graph Neural Network
- URL: http://arxiv.org/abs/2312.12475v1
- Date: Tue, 19 Dec 2023 12:25:10 GMT
- Title: Learning to Reweight for Graph Neural Network
- Authors: Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang,
Chengqiang Lu, Hongxia Yang and Fei Wu
- Abstract summary: Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method that substantially improves out-of-distribution generalization.
- Score: 63.978102332612906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) show promising results for graph tasks. However,
the generalization ability of existing GNNs degrades when there are
distribution shifts between testing and training graph data. The root cause of
this severe degradation is that GNNs are designed under the i.i.d. assumption.
In such a setting, GNNs tend to exploit subtle statistical correlations in the
training set for prediction, even when those correlations are spurious. In this
paper, we study the generalization ability of GNNs in Out-Of-Distribution (OOD)
settings. To solve this problem, we propose Learning to Reweight for
Generalizable Graph Neural Network (L2R-GNN), which enhances generalization to
achieve satisfactory performance on unseen testing graphs whose distributions
differ from those of the training graphs. We propose a novel nonlinear graph
decorrelation method that substantially improves out-of-distribution
generalization and compares favorably to previous methods in controlling the
over-reduced sample size problem. The variables of the graph representation are
clustered based on the stability of their correlations, and the graph
decorrelation method learns weights to remove correlations between variables of
different clusters rather than between any two variables. In addition, we
introduce an effective stochastic algorithm based on bi-level optimization for
the L2R-GNN framework, which jointly learns the optimal weights and GNN
parameters while avoiding overfitting. Experimental results show that L2R-GNN
greatly outperforms baselines on various graph prediction benchmarks under
distribution shifts.
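The abstract describes two components: a decorrelation step that clusters the dimensions of the graph representation by the stability of their correlations and learns sample weights that remove correlations only between dimensions in different clusters, and a bi-level optimization that alternates between updating those weights and the GNN parameters. The sketch below is a minimal illustration of that idea, not the authors' implementation, and rests on several assumptions that are not from the paper: a small MLP stands in for the GNN encoder, k-means over a correlation-stability proxy stands in for the clustering step, the data is synthetic, and all hyper-parameters (number of clusters, learning rates, step counts) are arbitrary.

```python
# Minimal sketch of clustered decorrelation reweighting plus an alternating
# (bi-level-style) training loop. Placeholder components are assumptions:
# the encoder is an MLP rather than a GNN, clustering uses scikit-learn k-means,
# and the data is random.

import torch
import torch.nn as nn


def weighted_correlation(z: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Weighted correlation matrix of representations z (n x d) under sample weights w (n,)."""
    w = torch.softmax(w, dim=0)                      # positive weights summing to 1
    mu = (w[:, None] * z).sum(dim=0)                 # weighted mean per dimension
    zc = z - mu
    cov = (w[:, None] * zc).t() @ zc                 # weighted covariance
    std = cov.diag().clamp_min(1e-8).sqrt()
    return cov / (std[:, None] * std[None, :])


def cluster_by_stability(corr_per_env: list, n_clusters: int) -> torch.Tensor:
    """Group dimensions whose pairwise correlations are stable across environments.

    Crude proxy: cluster dimensions on the variance of their correlation profile
    across the given correlation matrices (one per training environment/split).
    """
    from sklearn.cluster import KMeans               # assumption of this sketch
    stability = torch.stack(corr_per_env).std(dim=0)           # d x d
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stability.numpy())
    return torch.as_tensor(labels)


def decorrelation_loss(z, w, clusters):
    """Penalize correlations only between dimensions lying in different clusters."""
    corr = weighted_correlation(z, w)
    cross = clusters[:, None] != clusters[None, :]   # mask pairs from different clusters
    return (corr[cross] ** 2).mean()


# ---- alternating loop: inner step updates weights w, outer step updates the model ----
n, d, n_clusters = 256, 32, 4
encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, d))  # stand-in for a GNN
head = nn.Linear(d, 2)
x = torch.randn(n, 16)                               # toy "graph readout" features
y = torch.randint(0, 2, (n,))

w = torch.zeros(n, requires_grad=True)               # per-graph sample-weight logits
opt_model = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
opt_w = torch.optim.Adam([w], lr=1e-2)

with torch.no_grad():
    z0 = encoder(x)
    # two random halves stand in for training environments when clustering
    corrs = [weighted_correlation(z0[: n // 2], torch.zeros(n // 2)),
             weighted_correlation(z0[n // 2:], torch.zeros(n - n // 2))]
clusters = cluster_by_stability(corrs, n_clusters)

for step in range(200):
    # inner step: learn weights that decorrelate across clusters (encoder frozen)
    z = encoder(x).detach()
    opt_w.zero_grad()
    decorrelation_loss(z, w, clusters).backward()
    opt_w.step()

    # outer step: weighted risk minimization under the current weights (weights frozen)
    z = encoder(x)
    per_sample = nn.functional.cross_entropy(head(z), y, reduction="none")
    loss = (torch.softmax(w.detach(), dim=0) * per_sample).sum()
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
```

The paper describes the weights and GNN parameters as being learned jointly under a bi-level objective; the simple alternating updates above are only one common way to approximate such a scheme.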
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled the cost during inference.
This paper presents a novel approach to improve the scalability of GNNs through subgraph-based techniques.
arXiv Detail & Related papers (2024-10-19T06:27:24Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on widely used benchmark datasets demonstrate the strong performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z) - OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from identical distribution.
Existing GNNs lack out-of-distribution generalization ability, so their performance degrades substantially when there are distribution shifts between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Existing Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)