Spectral Adversarial Training for Robust Graph Neural Network
- URL: http://arxiv.org/abs/2211.10896v1
- Date: Sun, 20 Nov 2022 07:56:55 GMT
- Title: Spectral Adversarial Training for Robust Graph Neural Network
- Authors: Jintang Li, Jiaying Peng, Liang Chen, Zibin Zheng, Tingting Liang,
Qing Ling
- Abstract summary: Graph Neural Networks (GNNs) are vulnerable to slight but adversarially designed perturbations.
Adversarial Training (AT) is a successful approach to learning a robust model using adversarially perturbed training samples.
We propose Spectral Adversarial Training (SAT), a simple yet effective adversarial training approach for GNNs.
- Score: 36.26575133994436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies demonstrate that Graph Neural Networks (GNNs) are vulnerable
to slight but adversarially designed perturbations, known as adversarial
examples. To address this issue, robust training methods against adversarial
examples have received considerable attention in the literature.
Adversarial Training (AT) is a successful approach to learning a robust
model using adversarially perturbed training samples. Existing AT methods on
GNNs typically construct adversarial perturbations in terms of graph structures
or node features. However, they are less effective and fraught with challenges
on graph data due to the discreteness of graph structure and the relationships
between connected examples. In this work, we seek to address these challenges
and propose Spectral Adversarial Training (SAT), a simple yet effective
adversarial training approach for GNNs. SAT first adopts a low-rank
approximation of the graph structure based on spectral decomposition, and then
constructs adversarial perturbations in the spectral domain rather than
directly manipulating the original graph structure. To investigate its
effectiveness, we employ SAT on three widely used GNNs. Experimental results on
four public graph datasets demonstrate that SAT significantly improves the
robustness of GNNs against adversarial attacks without sacrificing
classification accuracy and training efficiency.
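To make the mechanism concrete, here is a minimal NumPy sketch of the spectral-domain idea: decompose the normalized graph, keep a low-rank view, and perturb the retained spectrum instead of flipping discrete edges. The function names, the `rank` and `epsilon` parameters, and the use of a bounded random perturbation in place of the paper's loss-maximizing inner step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def spectral_perturbation(A, rank=32, epsilon=0.1, rng=None):
    """Low-rank spectral view of the graph, perturbed in the spectral domain
    rather than by editing the discrete structure."""
    rng = np.random.default_rng() if rng is None else rng
    A_hat = normalized_adjacency(A)
    eigvals, eigvecs = np.linalg.eigh(A_hat)        # symmetric eigendecomposition
    top = np.argsort(-np.abs(eigvals))[:rank]       # keep largest-magnitude modes
    U, lam = eigvecs[:, top], eigvals[top]
    # Placeholder for the adversarial inner step: the retained eigenvalues get a
    # small bounded random perturbation here instead of a loss-maximizing one.
    delta = rng.uniform(-epsilon, epsilon, size=lam.shape)
    return (U * (lam + delta)) @ U.T                # perturbed low-rank graph

# Toy usage on a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_adv = spectral_perturbation(A, rank=2, epsilon=0.05)
```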
Related papers
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Robust Subgraph Learning by Monitoring Early Training Representations [5.524804393257921]
Graph neural networks (GNNs) have attracted significant attention for their outstanding performance in graph learning and node classification tasks.
Their vulnerability to adversarial attacks, particularly through susceptible nodes, poses a challenge in decision-making.
We introduce SHERD (Subgraph Learning Hale through Early Training Representation Distances), a novel technique that addresses both performance and adversarial robustness on graph inputs.
arXiv Detail & Related papers (2024-03-14T22:25:37Z) - A Simple and Yet Fairly Effective Defense for Graph Neural Networks [18.140756786259615]
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data.
Existing defense methods against small adversarial perturbations suffer from high time complexity.
This paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture.
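As a rough illustration of a noise-injection defense (where and how NoisyGNNs actually inject noise is an assumption here), a single noisy graph-convolution layer might look like the following NumPy sketch.

```python
import numpy as np

def noisy_gcn_layer(A_hat, H, W, noise_std=0.1, training=True, rng=None):
    """One graph-convolution layer with Gaussian noise added to the hidden
    representation during training; the placement of the noise is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    Z = A_hat @ H @ W                                      # propagate, then transform
    if training and noise_std > 0:
        Z = Z + rng.normal(0.0, noise_std, size=Z.shape)   # noise injection
    return np.maximum(Z, 0.0)                              # ReLU
```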
arXiv Detail & Related papers (2024-02-21T18:16:48Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Reliable Representations Make A Stronger Defender: Unsupervised
Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z) - EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural
Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
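A minimal sketch of an even-polynomial graph filter follows, assuming a normalized propagation matrix `A_hat` and fixed illustrative coefficients `weights`; how EvenNet parameterizes or learns these coefficients is not reproduced here.

```python
import numpy as np

def even_polynomial_filter(A_hat, X, weights):
    """Apply sum_k weights[k] * A_hat^(2k) @ X: only even powers of the
    propagation matrix are used, so odd-hop neighbors never contribute."""
    A2 = A_hat @ A_hat                  # two-hop propagation operator
    P = np.eye(A_hat.shape[0])          # A_hat^0
    out = np.zeros(X.shape)
    for w in weights:
        out += w * (P @ X)
        P = P @ A2                      # advance by two hops
    return out
```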
arXiv Detail & Related papers (2022-05-27T10:48:14Z) - Exploring High-Order Structure for Robust Graph Structure Learning [33.62223306095631]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks, i.e., an imperceptible structure perturbation can fool GNNs into making wrong predictions.
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness.
We propose a novel algorithm that incorporates the high-order structural information into the graph structure learning.
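For reference, feature smoothness over a graph is commonly measured with the Laplacian quadratic form; the sketch below is a generic illustration of that quantity and not necessarily the exact measure used in the paper.

```python
import numpy as np

def feature_smoothness(A, X):
    """Laplacian quadratic form tr(X^T L X) = 1/2 * sum_ij A_ij * ||x_i - x_j||^2,
    with L = D - A; larger values mean features differ more across edges."""
    L = np.diag(A.sum(axis=1)) - A
    return float(np.trace(X.T @ L @ X))
```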
arXiv Detail & Related papers (2022-03-22T07:03:08Z) - CAP: Co-Adversarial Perturbation on Weights and Features for Improving
Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely shown to improve a model's robustness against adversarial attacks.
However, it remains unclear how adversarial training improves the generalization abilities of GNNs on graph analytics problems.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
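A hedged reconstruction of one alternating perturbation step is sketched below in PyTorch (gradients are needed, hence the switch from NumPy); the model signature, the `rho_x`/`rho_w` step sizes, and the SAM-style restore of the weights are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def cap_style_step(model, A_hat, X, y, optimizer, rho_x=1e-2, rho_w=1e-2):
    """One alternating step: perturb node features, then weights, in the
    loss-increasing direction, and descend on the perturbed problem."""
    # Feature perturbation: one ascent step on the input features.
    X_adv = X.detach().clone().requires_grad_(True)
    grad_x, = torch.autograd.grad(F.cross_entropy(model(A_hat, X_adv), y), X_adv)
    X_adv = (X + rho_x * grad_x.sign()).detach()

    # Weight perturbation: one ascent step on the parameters (undone below),
    # probing the sharpness of the weight loss landscape.
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(F.cross_entropy(model(A_hat, X), y), params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho_w * g.sign())

    # Descend using gradients taken at the perturbed weights and features.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(A_hat, X_adv), y)
    loss.backward()
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(rho_w * g.sign())    # restore the original weights first
    optimizer.step()
    return float(loss.item())
```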
arXiv Detail & Related papers (2021-10-28T02:28:13Z) - Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning
Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.