Graph Adversarial Immunization for Certifiable Robustness
- URL: http://arxiv.org/abs/2302.08051v2
- Date: Sat, 23 Sep 2023 08:10:32 GMT
- Title: Graph Adversarial Immunization for Certifiable Robustness
- Authors: Shuchang Tao, Huawei Shen, Qi Cao, Yunfan Wu, Liang Hou, Xueqi Cheng
- Abstract summary: Graph neural networks (GNNs) are vulnerable to adversarial attacks.
Existing defenses focus on developing adversarial training or model modification.
We propose and formulate graph adversarial immunization, i.e., vaccinating part of the graph structure.
- Score: 63.58739705845775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite achieving great success, graph neural networks (GNNs) are vulnerable
to adversarial attacks. Existing defenses focus on developing adversarial
training or model modification. In this paper, we propose and formulate graph
adversarial immunization, i.e., vaccinating part of the graph structure to
improve the certifiable robustness of the graph against any admissible
adversarial attack. We
first propose edge-level immunization to vaccinate node pairs. Unfortunately,
such edge-level immunization cannot defend against emerging node injection
attacks, since it only immunizes existing node pairs. To this end, we further
propose node-level immunization. To avoid computationally intensive
combinatorial optimization associated with adversarial immunization, we develop
AdvImmune-Edge and AdvImmune-Node algorithms to effectively obtain the immune
node pairs or nodes. Extensive experiments demonstrate the superiority of
AdvImmune methods. In particular, AdvImmune-Node remarkably improves the ratio
of robust nodes by 79%, 294%, and 100%, after immunizing only 5% of nodes.
Furthermore, AdvImmune methods show excellent defensive performance against
various attacks, outperforming state-of-the-art defenses. To the best of our
knowledge, this is the first attempt to improve certifiable robustness from the
graph data perspective without losing performance on clean graphs, providing
new insights into graph adversarial learning.
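To make the formulation concrete, below is a minimal sketch of node-level immunization as a greedy selection problem. It assumes a hypothetical certified_robust_count oracle (not from the paper) that reports how many nodes are certifiably robust once a given node set is immunized, i.e., excluded from the attacker's admissible perturbations; the paper's AdvImmune-Node algorithm is designed precisely to avoid this kind of brute-force combinatorial search.
```python
# A minimal sketch, NOT the paper's AdvImmune-Node algorithm: it relies on a
# hypothetical oracle `certified_robust_count(immune_nodes)` that returns the
# number of certifiably robust nodes when `immune_nodes` are vaccinated
# (i.e., excluded from the attacker's admissible perturbations).
from typing import Callable, Iterable, Set


def greedy_node_immunization(
    nodes: Iterable[int],
    budget: int,
    certified_robust_count: Callable[[Set[int]], int],
) -> Set[int]:
    """Greedily pick up to `budget` nodes whose immunization most increases
    the number of certifiably robust nodes."""
    immune: Set[int] = set()
    candidates = set(nodes)
    for _ in range(budget):
        base = certified_robust_count(immune)
        best_node, best_gain = None, 0
        for v in candidates:
            gain = certified_robust_count(immune | {v}) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        if best_node is None:  # no remaining candidate improves robustness
            break
        immune.add(best_node)
        candidates.remove(best_node)
    return immune
```
Even this naive greedy loop makes the cost visible: every candidate evaluation re-certifies the whole graph, which is why the abstract describes the exact combinatorial optimization as computationally intensive and motivates the AdvImmune algorithms.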
Related papers
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones (a minimal illustrative sketch of this edge-dropping idea appears after the related papers below).
arXiv Detail & Related papers (2024-06-14T08:46:26Z) - Simple and Efficient Partial Graph Adversarial Attack: A New Perspective [16.083311332179633]
Existing global attack methods treat all nodes in the graph as their attack targets.
We propose a new method, partial graph attack (PGA), which selects only the vulnerable nodes as attack targets.
PGA can achieve significant improvements in both attack effect and attack efficiency compared to other existing graph global attack methods.
arXiv Detail & Related papers (2023-08-15T15:23:36Z) - Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z) - Adversarial Immunization for Certifiable Robustness on Graphs [47.957807368630995]
Graph neural networks (GNNs) are vulnerable to adversarial attacks, similar to other deep learning models.
We propose and formulate the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the robustness of the graph against any admissible adversarial attack.
arXiv Detail & Related papers (2020-07-19T10:41:10Z) - Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks [0.76146285961466]
By abusing graph convolutions, an attacker can influence a node's classification result by poisoning its neighbors.
We generate strong adversarial perturbations that are effective not only on one-hop neighbors but also on nodes farther from the target.
Our proposed method achieves a 99% attack success rate within two hops of the target on two datasets.
arXiv Detail & Related papers (2020-02-19T05:44:09Z)
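The random edge dropping check mentioned in the backdoor-defense entry above can be illustrated with a minimal sketch. It assumes a hypothetical predict(node, edges) stand-in for a trained GNN's per-node prediction and a simple stability score; the cited paper's actual detector and decision rule differ.
```python
# A minimal sketch of the random edge dropping idea, under the assumption that
# a backdoored (poisoned) node's prediction is unusually sensitive to its
# incident edges. `predict` is a hypothetical stand-in for a trained GNN's
# per-node prediction; the cited paper's actual detector and threshold differ.
import random
from typing import Callable, List, Tuple

Edge = Tuple[int, int]


def prediction_stability(
    node: int,
    edges: List[Edge],
    predict: Callable[[int, List[Edge]], int],
    num_trials: int = 20,
    drop_rate: float = 0.2,
) -> float:
    """Fraction of randomly edge-dropped subgraphs on which the node keeps its
    original prediction; low stability hints that the node may be poisoned."""
    original = predict(node, edges)
    kept = 0
    for _ in range(num_trials):
        sub_edges = [e for e in edges if random.random() > drop_rate]
        if predict(node, sub_edges) == original:
            kept += 1
    return kept / num_trials
```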