A Simple and Yet Fairly Effective Defense for Graph Neural Networks
- URL: http://arxiv.org/abs/2402.13987v1
- Date: Wed, 21 Feb 2024 18:16:48 GMT
- Title: A Simple and Yet Fairly Effective Defense for Graph Neural Networks
- Authors: Sofiane Ennadir, Yassine Abbahaddou, Johannes F. Lutzeyer, Michalis
Vazirgiannis, Henrik Boström
- Abstract summary: Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data.
Existing defense methods against small adversarial perturbations suffer from high time complexity.
This paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture.
- Score: 18.140756786259615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as the dominant approach for
machine learning on graph-structured data. However, concerns have arisen
regarding the vulnerability of GNNs to small adversarial perturbations.
Existing defense methods against such perturbations suffer from high time
complexity and can negatively impact the model's performance on clean graphs.
To address these challenges, this paper introduces NoisyGNNs, a novel defense
method that incorporates noise into the underlying model's architecture. We
establish a theoretical connection between noise injection and the enhancement
of GNN robustness, highlighting the effectiveness of our approach. We further
conduct extensive empirical evaluations on the node classification task to
validate our theoretical findings, focusing on two popular GNNs: the GCN and
GIN. The results demonstrate that NoisyGNN achieves superior or comparable
defense performance to existing methods while minimizing added time complexity.
The NoisyGNN approach is model-agnostic, allowing it to be integrated with
different GNN architectures. Successful combinations of our NoisyGNN approach
with existing defense techniques demonstrate even further improved adversarial
defense results. Our code is publicly available at:
https://github.com/Sennadir/NoisyGNN.
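The abstract describes injecting noise into the model's architecture but does not spell out the mechanism. As a rough, non-authoritative illustration, the NumPy sketch below adds zero-mean Gaussian noise to a GCN layer's hidden representation before message passing; the injection point, noise distribution, and scale `sigma` are assumptions for illustration, not the paper's verified design.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, as used in GCN."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def noisy_gcn_layer(A_hat, H, W, sigma=0.1, rng=None):
    """One GCN layer with Gaussian noise injected into the hidden
    representation before propagation (illustrative assumption; the
    paper's exact injection scheme may differ)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H_noisy = H + sigma * rng.standard_normal(size=H.shape)
    return np.maximum(A_hat @ H_noisy @ W, 0.0)  # ReLU activation
```

The intuition suggested by the abstract's theoretical connection is that random perturbations of the hidden states can mask the effect of small adversarial edits to the graph, at negligible extra cost per layer.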
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN integrity, covering both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Adversarially Robust Neural Architecture Search for Graph Neural Networks [45.548352741415556]
Graph Neural Networks (GNNs) are prone to adversarial attacks, which pose serious threats to applying GNNs in risk-sensitive domains.
Existing defensive methods neither guarantee performance when facing new data/tasks or adversarial attacks, nor provide insights into GNN robustness from an architectural perspective.
We propose a novel robust neural architecture search framework for GNNs (G-RNA).
We show that G-RNA significantly outperforms manually designed robust GNNs and vanilla graph NAS baselines by 12.1% to 23.4% under adversarial attacks.
arXiv Detail & Related papers (2023-04-09T06:00:50Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy and robustness against adversarial attacks.
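A minimal sketch of the ensemble idea, assuming each member is a trained GNN classifier returning per-node class probabilities (the paper's random sampling of substructures and feature subspaces for training each member is omitted):

```python
import numpy as np

def ensemble_predict(models, A, X):
    """Average the per-node class probabilities of several independently
    trained GNNs and take the argmax. `models` is a list of callables
    mapping (adjacency, features) to an array of shape
    (num_nodes, num_classes)."""
    probs = np.mean([m(A, X) for m in models], axis=0)  # soft voting
    return probs.argmax(axis=1)  # predicted class per node
```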
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Robust Graph Neural Networks using Weighted Graph Laplacian [1.8292714902548342]
Graph neural networks (GNNs) are vulnerable to noise and adversarial attacks in the input data.
We propose a generic framework for robustifying GNNs, known as the Weighted Laplacian GNN (RWL-GNN).
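A minimal sketch of the weighted-Laplacian idea, assuming nonnegative edge weights that can down-weight suspicious edges (the optimization RWL-GNN uses to learn these weights is not shown):

```python
import numpy as np

def weighted_laplacian(A, w):
    """Laplacian L = D_w - A_w of a graph whose edges are reweighted by
    a nonnegative weight matrix w (same shape as A). Adversarial edges
    can be suppressed by driving their weights toward zero."""
    A_w = A * w                     # element-wise reweighted adjacency
    D_w = np.diag(A_w.sum(axis=1))  # weighted degree matrix
    return D_w - A_w
```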
arXiv Detail & Related papers (2022-08-03T05:36:35Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
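As a minimal sketch of what an even-polynomial filter looks like (assuming a symmetrically normalized propagation matrix `A_hat` and scalar coefficients; the paper's exact parameterization is not reproduced here):

```python
import numpy as np

def even_polynomial_filter(A_hat, X, coeffs):
    """Apply H = sum_k coeffs[k] * A_hat^(2k) @ X, a polynomial filter
    with only even powers, so information propagates exclusively over
    even-hop paths and odd-hop neighbors are ignored."""
    A2 = A_hat @ A_hat        # one two-hop propagation step
    out = coeffs[0] * X       # k = 0 term (zero hops / identity)
    P = X
    for c in coeffs[1:]:
        P = A2 @ P            # advance two more hops
        out = out + c * P
    return out
```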
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [15.448462928073635]
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data.
Recent studies show that GNNs are vulnerable to graph adversarial attacks.
We propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models.
arXiv Detail & Related papers (2022-01-30T06:32:44Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose the Uncertainty Matching GNN (UM-GNN), which is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.