Adversarial Immunization for Certifiable Robustness on Graphs
- URL: http://arxiv.org/abs/2007.09647v5
- Date: Wed, 25 Aug 2021 08:39:40 GMT
- Title: Adversarial Immunization for Certifiable Robustness on Graphs
- Authors: Shuchang Tao, Huawei Shen, Qi Cao, Liang Hou, Xueqi Cheng
- Abstract summary: Graph neural networks (GNNs) are vulnerable to adversarial attacks, similar to other deep learning models.
We propose and formulate the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the robustness of the graph against any admissible adversarial attack.
- Score: 47.957807368630995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite achieving strong performance on the semi-supervised node classification task, graph neural networks (GNNs) are vulnerable to adversarial attacks, like other deep learning models. Existing research focuses on developing either robust GNN models or attack detection methods against adversarial attacks on graphs, but little attention has been paid to the potential and practice of immunizing graphs against adversarial attacks. In this paper, we propose and formulate the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the certifiable robustness of the graph against any admissible adversarial attack. We further propose an effective algorithm, called AdvImmune, which optimizes with meta-gradients in a discrete way to circumvent the computationally expensive combinatorial optimization that solving the adversarial immunization problem directly would require. Experiments are conducted on two citation networks and one social network. The results demonstrate that the proposed AdvImmune method remarkably improves the ratio of robust nodes by 12%, 42%, and 65% on the three networks respectively, with an affordable immunization budget of only 5% of edges.
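To make the selection step concrete, here is a minimal sketch, assuming PyTorch, of meta-gradient-guided immunization: score every candidate node pair by the gradient of a robustness surrogate with respect to the adjacency matrix, then greedily vaccinate the highest-impact pairs. Both robustness_surrogate and advimmune_sketch are hypothetical illustrations, not the authors' released implementation or the paper's actual certificate.

```python
# Minimal sketch of meta-gradient immunization (not the authors' code).
import torch

def robustness_surrogate(adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
    # Illustrative stand-in for the certificate: one step of mean-neighbor
    # aggregation, then the summed per-node margin between the top two scores.
    # Assumes `feats` has at least two columns, acting as class scores.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    logits = (adj / deg) @ feats
    top2 = logits.topk(2, dim=1).values
    return (top2[:, 0] - top2[:, 1]).sum()

def advimmune_sketch(adj: torch.Tensor, feats: torch.Tensor, budget: int):
    # Greedily vaccinate the node pairs whose edge state most influences the
    # surrogate, ranked by the meta-gradient w.r.t. the adjacency matrix.
    immune = []
    for _ in range(budget):
        a = adj.clone().requires_grad_(True)
        robustness_surrogate(a, feats).backward()
        score = a.grad.detach().abs()
        score.fill_diagonal_(-1.0)                # self-pairs are not candidates
        for i, j in immune:                       # skip already-vaccinated pairs
            score[i, j] = score[j, i] = -1.0
        flat = torch.argmax(score).item()
        immune.append(divmod(flat, adj.size(1)))  # (row, col) of the chosen pair
    return immune
```

In the paper, the vaccinated pairs are then held fixed when certifying robustness against worst-case perturbations; the toy margin above only illustrates how meta-gradients rank candidate pairs while keeping the selection discrete.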
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
However, their robustness on heterogeneous graphs, particularly against adversarial attacks, remains underexplored.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Robustness-Inspired Defense Against Backdoor Attacks on Graph Neural Networks [30.82433380830665]
Graph Neural Networks (GNNs) have achieved promising results in tasks such as node classification and graph classification.
Recent studies reveal that GNNs are vulnerable to backdoor attacks, posing a significant threat to their real-world adoption.
We propose using random edge dropping to detect backdoors and theoretically show that it can efficiently distinguish poisoned nodes from clean ones (a sketch of the idea follows this entry).
arXiv Detail & Related papers (2024-06-14T08:46:26Z)
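A minimal sketch of the random edge-dropping idea from the entry above, assuming PyTorch: predictions on poisoned nodes tend to be unstable when possibly trigger-carrying edges are removed, so disagreement across randomly edge-dropped copies of the graph can serve as a detection score. The classifier interface model(adj, feats) and all hyperparameters are assumptions, not the paper's implementation.

```python
# Hedged sketch of backdoor detection via random edge dropping.
import torch

def drop_edges(adj: torch.Tensor, p: float) -> torch.Tensor:
    # Independently drop each undirected edge with probability p.
    keep = (torch.rand_like(adj) > p).float()
    keep = torch.triu(keep, diagonal=1)
    keep = keep + keep.T
    return adj * keep

def instability_scores(model, adj, feats, p=0.3, rounds=20):
    # Fraction of rounds in which a node's prediction flips under random
    # edge dropping; high scores flag likely poisoned nodes.
    with torch.no_grad():
        base = model(adj, feats).argmax(dim=1)
        flips = torch.zeros(base.size(0))
        for _ in range(rounds):
            pred = model(drop_edges(adj, p), feats).argmax(dim=1)
            flips += (pred != base).float()
    return flips / rounds
```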
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks (the relaxation it builds on is sketched after this entry).
Compared to the state of the art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
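A hedged sketch of the relaxation behind differentiable attacks such as the DGA entry above: replace discrete edge flips with continuous flip probabilities so the attacker's objective can be optimized by gradient descent. The victim interface model(adj, feats), the loss, and the parameterization are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of a differentiable graph attack via relaxed edge flips.
import torch
import torch.nn.functional as F

def differentiable_attack_sketch(model, adj, feats, labels, steps=50, lr=0.1):
    n = adj.size(0)
    # Relaxed flip scores, initialized low so flip probabilities start near 0.
    theta = torch.full((n, n), -6.0, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        p = torch.sigmoid(theta)
        p = torch.triu(p, diagonal=1)
        p = p + p.T                             # symmetric, zero diagonal
        adj_pert = adj + (1.0 - 2.0 * adj) * p  # soft edge flips
        # Ascend the victim's loss by descending its negation.
        loss = -F.cross_entropy(model(adj_pert, feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(theta).detach()        # learned flip probabilities
```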
- Graph Adversarial Immunization for Certifiable Robustness [63.58739705845775]
Graph neural networks (GNNs) are vulnerable to adversarial attacks.
Existing defenses focus on developing adversarial training or model modification.
We propose and formulate graph adversarial immunization, i.e., vaccinating part of the graph structure.
arXiv Detail & Related papers (2023-02-16T03:18:43Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, that resists GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack that operates by injecting fake nodes (the basic injection operation is sketched after this entry).
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
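Finally, a minimal sketch of the fake-node injection operation the GANI entry refers to, assuming PyTorch: pad the adjacency and feature matrices with k new nodes and wire them to chosen target nodes. The naive mean-feature choice below is easy to detect; it illustrates only the bookkeeping, not GANI's imperceptibility constraints or node-placement strategy.

```python
# Hedged sketch of fake-node injection into a dense adjacency matrix.
import torch

def inject_fake_nodes(adj, feats, targets, k=1):
    n, d = feats.size(0), feats.size(1)
    # Grow the adjacency matrix from (n, n) to (n + k, n + k).
    new_adj = torch.zeros(n + k, n + k)
    new_adj[:n, :n] = adj
    # Naive fake features: the mean of the existing feature rows.
    fake = feats.mean(dim=0, keepdim=True).expand(k, d)
    new_feats = torch.cat([feats, fake], dim=0)
    for i in range(k):
        u = n + i
        for t in targets:  # connect each injected node to its targets
            new_adj[u, t] = new_adj[t, u] = 1.0
    return new_adj, new_feats
```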
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.