Adversarially Robust Neural Architecture Search for Graph Neural
Networks
- URL: http://arxiv.org/abs/2304.04168v1
- Date: Sun, 9 Apr 2023 06:00:50 GMT
- Title: Adversarially Robust Neural Architecture Search for Graph Neural
Networks
- Authors: Beini Xie, Heng Chang, Ziwei Zhang, Xin Wang, Daixin Wang, Zhiqiang
Zhang, Rex Ying, Wenwu Zhu
- Abstract summary: Graph Neural Networks (GNNs) are prone to adversarial attacks, which pose serious threats to applying GNNs in risk-sensitive domains.
Existing defensive methods neither guarantee performance when facing new data/tasks or adversarial attacks nor provide insights into GNN robustness from an architectural perspective.
We propose a novel Robust Neural Architecture search framework for GNNs (G-RNA).
We show that G-RNA significantly outperforms manually designed robust GNNs and vanilla graph NAS baselines by 12.1% to 23.4% under adversarial attacks.
- Score: 45.548352741415556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) obtain tremendous success in modeling relational
data. Still, they are prone to adversarial attacks, which pose serious threats
to applying GNNs in risk-sensitive domains. Existing defensive methods neither
guarantee performance when facing new data/tasks or adversarial attacks nor
provide insights into GNN robustness from an architectural perspective. Neural
Architecture Search (NAS) has the potential to solve this problem by automating
GNN architecture designs. Nevertheless, current graph NAS approaches lack
robust design and are vulnerable to adversarial attacks. To tackle these
challenges, we propose a novel Robust Neural Architecture search framework for
GNNs (G-RNA). Specifically, we design a robust search space for the
message-passing mechanism by adding graph structure mask operations into the
search space, which comprises various defensive operation candidates and allows
us to search for defensive GNNs. Furthermore, we define a robustness metric to
guide the search procedure, which helps to filter robust architectures. In this
way, G-RNA helps understand GNN robustness from an architectural perspective
and effectively searches for optimal adversarially robust GNNs. Extensive
experimental results on benchmark datasets show that G-RNA significantly
outperforms manually designed robust GNNs and vanilla graph NAS baselines by
12.1% to 23.4% under adversarial attacks.
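To make the two ingredients of G-RNA concrete, the sketch below shows, under stated assumptions, (i) one possible graph structure mask operation and (ii) a perturbation-based robustness proxy for ranking candidate architectures. The Jaccard-similarity mask, all function names, and the random-edge-deletion score are illustrative assumptions, not the paper's exact operators or metric.

```python
import numpy as np

def jaccard_structure_mask(adj, features, threshold=0.05):
    """One hypothetical structure-mask candidate: zero out edges whose
    endpoint features have low Jaccard similarity. G-RNA's search space
    contains several defensive mask operators; this specific choice is
    only for demonstration."""
    masked = adj.astype(float).copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))
    for i, j in zip(rows, cols):
        inter = np.minimum(features[i], features[j]).sum()
        union = np.maximum(features[i], features[j]).sum()
        if (inter / union if union > 0 else 0.0) < threshold:
            masked[i, j] = masked[j, i] = 0.0
    return masked  # cleaned adjacency fed into message passing

def stability_score(predict, adj, features, n_perturb=10, drop_frac=0.05, seed=0):
    """A proxy robustness metric: the fraction of predictions left
    unchanged by random edge deletions, averaged over trials. The paper
    defines its own metric; this proxy only illustrates how such a score
    can guide the search toward robust architectures."""
    rng = np.random.default_rng(seed)
    base = predict(adj, features)                 # predicted labels per node
    edges = np.transpose(np.nonzero(np.triu(adj, k=1)))
    k = max(1, int(drop_frac * len(edges)))
    scores = []
    for _ in range(n_perturb):
        pert = adj.copy()
        for i, j in edges[rng.choice(len(edges), size=k, replace=False)]:
            pert[i, j] = pert[j, i] = 0
        scores.append(float((predict(pert, features) == base).mean()))
    return float(np.mean(scores))                 # higher = more stable
```

In a search of this kind, each candidate architecture pairs mask operators with a message-passing scheme, and a score like stability_score is combined with clean accuracy to filter robust architectures, mirroring the metric-guided search the abstract describes.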
Related papers
- A Simple and Yet Fairly Effective Defense for Graph Neural Networks [18.140756786259615]
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data.
Existing defense methods against small adversarial perturbations suffer from high time complexity.
This paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture.
arXiv Detail & Related papers (2024-02-21T18:16:48Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Can Directed Graph Neural Networks be Adversarially Robust? [26.376780541893154]
This study aims to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of Graph Neural Networks (GNNs).
We introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer.
This framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks.
arXiv Detail & Related papers (2023-06-03T04:56:04Z)
- PyGFI: Analyzing and Enhancing Robustness of Graph Neural Networks Against Hardware Errors [3.2780036095732035]
Graph neural networks (GNNs) have emerged as a promising paradigm for learning on graph-structured data.
This paper conducts a large-scale and empirical study of GNN resilience, aiming to understand the relationship between hardware faults and GNN accuracy.
arXiv Detail & Related papers (2022-12-07T06:14:14Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter (a minimal sketch of such a filter appears after this list).
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [15.448462928073635]
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data.
Recent studies show that GNNs are vulnerable to graph adversarial attacks.
We propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models.
arXiv Detail & Related papers (2022-01-30T06:32:44Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from existing backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
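As a companion to the EvenNet entry above, here is a minimal sketch of an even-polynomial graph filter, assuming symmetric normalization and uniform coefficients; EvenNet learns its own coefficients, so everything here beyond the even-powers idea is an illustrative assumption.

```python
import numpy as np

def even_polynomial_filter(adj, features, order=4, weights=None):
    """Even-polynomial spectral filter: out = sum_k w_k * P^(2k) @ X,
    with P = D^{-1/2} A D^{-1/2}. Only even powers of P appear, so every
    message travels an even number of steps (the 'ignore odd-hop
    neighbors' idea in EvenNet's title)."""
    deg = adj.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])      # guard isolated nodes
    p = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    p2 = p @ p                                   # one even (2-step) hop
    if weights is None:
        # Uniform coefficients are an assumption; EvenNet learns them.
        weights = np.full(order + 1, 1.0 / (order + 1))
    out = weights[0] * features                  # k = 0 term: P^0 = I
    h = features
    for k in range(1, order + 1):
        h = p2 @ h                               # h = P^(2k) @ X
        out = out + weights[k] * h
    return out
```

Because the filter only mixes representations over even numbers of edges, changing a test graph's homophily level leaves its even-hop receptive field intact, which is one intuition for the robustness to homophily changes claimed in that entry.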
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.