CAP: Co-Adversarial Perturbation on Weights and Features for Improving
Generalization of Graph Neural Networks
- URL: http://arxiv.org/abs/2110.14855v1
- Date: Thu, 28 Oct 2021 02:28:13 GMT
- Title: CAP: Co-Adversarial Perturbation on Weights and Features for Improving
Generalization of Graph Neural Networks
- Authors: Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang,
Xin Wang
- Abstract summary: Adversarial training has been widely demonstrated to improve a model's robustness against adversarial attacks.
It remains unclear how adversarial training could improve the generalization ability of GNNs in graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design an alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes in turn.
- Score: 59.692017490560275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent advances of graph neural networks (GNNs) in modeling graph
data, training GNNs on large datasets is notoriously hard due to overfitting.
Adversarial training, which augments data with worst-case adversarial examples,
has been widely demonstrated to improve a model's robustness against adversarial
attacks and its generalization ability. However, while previous adversarial
training generally focuses on protecting GNNs from malicious attacks, it remains
unclear how adversarial training could improve the generalization ability of
GNNs in graph analytics problems. In this paper, we investigate GNNs through the
lens of the weight and feature loss landscapes, i.e., how the loss changes with
respect to model weights and node features, respectively. We conclude that GNNs
are prone to falling into sharp local minima in both loss landscapes, where they
exhibit poor generalization performance. To tackle this problem, we construct
the co-adversarial perturbation (CAP) optimization problem in terms of weights
and features, and design an alternating adversarial perturbation algorithm that
flattens the weight and feature loss landscapes in turn. Furthermore, we divide
the training process into two stages: the first conducts standard cross-entropy
minimization to ensure quick convergence of the GNN model, while the second
applies our alternating adversarial training to avoid falling into locally sharp
minima. Extensive experiments demonstrate that CAP generally improves the
generalization performance of GNNs on a variety of benchmark graph datasets.
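As a concrete illustration of the alternating scheme described in the abstract, here is a minimal sketch, not the authors' released implementation: it pairs a SAM-style ascent step on the weights with a gradient ascent step on the node features in alternating epochs, after a standard cross-entropy warm-up stage. The TinyGCN model, the perturbation radii rho_w and rho_f, and the warm-up length are illustrative assumptions.

```python
# Minimal sketch of CAP-style alternating adversarial perturbation.
# Assumptions (not the paper's exact settings): dense normalized adjacency,
# a 2-layer GCN, the radii rho_w / rho_f, and the warm-up length.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hid)
        self.lin2 = nn.Linear(d_hid, d_out)

    def forward(self, a_hat, x):
        # a_hat: symmetrically normalized adjacency with self-loops (N x N)
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)

def perturb_weights(model, loss, rho_w):
    """SAM-style ascent: move weights to a nearby high-loss point."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    deltas = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            d = rho_w * g / norm
            p.add_(d)
            deltas.append(d)
    return deltas  # kept so the perturbation can be undone

def train_cap(model, a_hat, x, y, mask, epochs=200, warmup=50,
              rho_w=0.05, rho_f=0.01, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = lambda: F.cross_entropy(model(a_hat, x)[mask], y[mask])
    for epoch in range(epochs):
        opt.zero_grad()
        if epoch < warmup:
            # Stage 1: plain cross-entropy for quick convergence.
            ce().backward()
        elif epoch % 2 == 0:
            # Stage 2a: flatten the *weight* loss landscape.
            deltas = perturb_weights(model, ce(), rho_w)
            ce().backward()           # descent gradient at perturbed weights
            with torch.no_grad():     # undo the ascent step before updating
                for p, d in zip(model.parameters(), deltas):
                    p.sub_(d)
        else:
            # Stage 2b: flatten the *feature* loss landscape.
            x_adv = x.detach().requires_grad_(True)
            g = torch.autograd.grad(
                F.cross_entropy(model(a_hat, x_adv)[mask], y[mask]), x_adv)[0]
            x_pert = (x + rho_f * g / (g.norm() + 1e-12)).detach()
            F.cross_entropy(model(a_hat, x_pert)[mask], y[mask]).backward()
        opt.step()
    return model
```

The key design choice is that each adversarial epoch takes its descent gradient at the perturbed point (weights or features), which is what pushes training toward flatter regions of the corresponding loss landscape; validation bookkeeping is omitted for brevity.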
Related papers
- Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks [14.89001880258583]
Graph neural networks (GNNs) have achieved tremendous success, but recent studies have shown that GNNs are vulnerable to adversarial attacks.
We investigate the adversarial robustness of GNNs by considering graph data patterns, model-specific factors, and the transferability of adversarial examples.
This work illuminates the vulnerabilities of GNNs and opens many promising avenues for designing robust GNNs.
arXiv Detail & Related papers (2024-06-20T01:24:18Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application on graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inference on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter (a minimal sketch of such a filter appears after this list).
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
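To make the even-polynomial idea in the EvenNet entry above concrete, here is a minimal sketch assuming a GPR-style filter with learnable coefficients over even powers of the normalized adjacency; the number of terms and the uniform initialization are illustrative assumptions, not the paper's settings.

```python
# Sketch of an even-polynomial graph filter in the spirit of EvenNet
# (term count and uniform initialization are assumptions).
import torch
import torch.nn as nn

class EvenPolyFilter(nn.Module):
    def __init__(self, num_even_terms=5):
        super().__init__()
        # gamma[k] weights the A^(2k) term; odd powers are skipped, so
        # messages from odd-hop neighbors never reach a node.
        self.gamma = nn.Parameter(torch.full((num_even_terms,),
                                             1.0 / num_even_terms))

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency (N x N); x: node features (N x d)
        out = self.gamma[0] * x
        h = x
        for k in range(1, len(self.gamma)):
            h = a_hat @ (a_hat @ h)  # advance two hops per term
            out = out + self.gamma[k] * h
        return out
```

Because only even powers of the propagation matrix appear, the filter's response is insensitive to whether a graph is homophilic or heterophilic, which is the robustness intuition the entry alludes to.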
This list is automatically generated from the titles and abstracts of the papers on this site.