Can Directed Graph Neural Networks be Adversarially Robust?
- URL: http://arxiv.org/abs/2306.02002v1
- Date: Sat, 3 Jun 2023 04:56:04 GMT
- Title: Can Directed Graph Neural Networks be Adversarially Robust?
- Authors: Zhichao Hou, Xitong Zhang, Wei Wang, Charu C. Aggarwal, Xiaorui Liu
- Abstract summary: This study aims to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of Graph Neural Networks (GNNs).
We introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer.
This framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks.
- Score: 26.376780541893154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing research on robust Graph Neural Networks (GNNs) fails to
acknowledge the significance of directed graphs in providing rich information
about networks' inherent structure. This work presents the first investigation
into the robustness of GNNs in the context of directed graphs, aiming to
harness the profound trust implications offered by directed graphs to bolster
the robustness and resilience of GNNs. Our study reveals that existing directed
GNNs are not adversarially robust. In pursuit of our goal, we introduce a new
and realistic directed graph attack setting and propose an innovative,
universal, and efficient message-passing framework as a plug-in layer to
significantly enhance the robustness of GNNs. Combined with existing defense
strategies, this framework achieves outstanding clean accuracy and
state-of-the-art robust performance, offering superior defense against both
transfer and adaptive attacks. The findings in this study reveal a novel and
promising direction for this crucial research area. The code will be made
publicly available upon the acceptance of this work.
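The abstract does not specify how the plug-in layer works, but the basic ingredient it builds on, message passing that respects edge direction, can be sketched. Below is a minimal illustration assuming a PyTorch setting with a dense adjacency matrix; the class name `DirectedMessagePassing`, the separate in/out mean aggregation, and the three weight matrices are illustrative assumptions, not the paper's actual framework.

```python
# A minimal sketch of direction-aware message passing (illustrative only;
# not the paper's actual framework). Assumes a dense adjacency matrix
# where adj[i, j] = 1 encodes a directed edge i -> j.
import torch
import torch.nn as nn


class DirectedMessagePassing(nn.Module):
    """Hypothetical plug-in layer: aggregate in-neighbors and out-neighbors
    separately, then combine both with the node's own representation."""

    def __init__(self, dim: int):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)
        self.w_in = nn.Linear(dim, dim)   # messages along incoming edges
        self.w_out = nn.Linear(dim, dim)  # messages along outgoing edges

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean-aggregate over out-neighbors (rows of adj) and in-neighbors
        # (rows of adj.T), guarding against zero degrees.
        out_deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        in_deg = adj.t().sum(dim=1, keepdim=True).clamp(min=1.0)
        msg_out = (adj / out_deg) @ x
        msg_in = (adj.t() / in_deg) @ x
        return torch.relu(self.w_self(x) + self.w_in(msg_in) + self.w_out(msg_out))


# Usage: drop the layer into an otherwise standard GNN stack.
x = torch.randn(5, 16)                   # 5 nodes, 16 features each
adj = (torch.rand(5, 5) > 0.7).float()   # a random directed graph
print(DirectedMessagePassing(16)(x, adj).shape)  # torch.Size([5, 16])
```

Keeping the two directions as distinct channels is one simple way to preserve the directional structure that, per the abstract, existing robust GNNs overlook.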
Related papers
- Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks [14.89001880258583]
Graph neural networks (GNNs) have achieved tremendous success, but recent studies have shown that GNNs are vulnerable to adversarial attacks.
We investigate the adversarial robustness of GNNs by considering graph data patterns, model-specific factors, and the transferability of adversarial examples.
This work illuminates the vulnerabilities of GNNs and opens many promising avenues for designing robust GNNs.
arXiv Detail & Related papers (2024-06-20T01:24:18Z)
- Expressivity of Graph Neural Networks Through the Lens of Adversarial Robustness [42.129871250427016]
We use adversarial robustness as a tool to uncover a significant gap between the theoretically possible and empirically achieved expressive power of GNNs.
We develop efficient adversarial attacks for subgraph counting and show that more powerful GNNs fail to generalize even to small perturbations to the graph's structure.
arXiv Detail & Related papers (2023-08-16T07:05:41Z)
- Adversarially Robust Neural Architecture Search for Graph Neural Networks [45.548352741415556]
Graph Neural Networks (GNNs) are prone to adversarial attacks, which pose serious threats to applying GNNs in risk-sensitive domains.
Existing defensive methods neither guarantee performance when facing new data/tasks or adversarial attacks nor provide insights into GNN robustness from an architectural perspective.
We propose a novel robust neural architecture search framework for GNNs (G-RNA).
We show that G-RNA significantly outperforms manually designed robust GNNs and vanilla graph NAS baselines by 12.1% to 23.4% under adversarial attacks.
arXiv Detail & Related papers (2023-04-09T06:00:50Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
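A thread running through these related papers is the structure-perturbation threat model: flipping a small number of edges in the graph can noticeably degrade a trained GNN. The sketch below illustrates that idea with a greedy, gradient-guided edge flip; the function name `greedy_edge_flip`, the dense-adjacency setting, the first-order scoring heuristic, and the toy linear model are assumptions for illustration, not any single paper's attack.

```python
# A minimal sketch of a gradient-guided structure perturbation (edge flips),
# illustrating the shared threat model; not any single paper's algorithm.
import torch
import torch.nn.functional as F


def greedy_edge_flip(model, x, adj, labels, budget=5):
    """Flip the `budget` entries of `adj` whose flips are estimated (by a
    first-order approximation) to increase the training loss the most."""
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x, adj), labels)
    grad = torch.autograd.grad(loss, adj)[0]
    # Adding an absent edge (0 -> 1) helps the attacker when the gradient is
    # positive; removing a present edge (1 -> 0) helps when it is negative.
    score = grad * (1.0 - 2.0 * adj.detach())
    idx = score.flatten().topk(budget).indices
    perturbed = adj.detach().clone()
    rows, cols = idx // adj.size(1), idx % adj.size(1)
    perturbed[rows, cols] = 1.0 - perturbed[rows, cols]
    return perturbed


# Toy demonstration with a one-layer linear "GNN" (purely illustrative).
torch.manual_seed(0)
n, d, c = 6, 8, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.6).float()
labels = torch.randint(0, c, (n,))
w = torch.randn(d, c)
model = lambda feats, a: (a @ feats) @ w   # logits from one propagation step
adv_adj = greedy_edge_flip(model, x, adj, labels, budget=4)
print((adv_adj != adj).sum().item())       # 4 flipped entries
```

Defenses such as the directed message-passing layer sketched earlier, structure learning (Pro-GNN), and uncertainty matching (UM-GNN) can be read as different ways of limiting how much damage a small number of such flips can do.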