Understanding and Improving Graph Injection Attack by Promoting
Unnoticeability
- URL: http://arxiv.org/abs/2202.08057v1
- Date: Wed, 16 Feb 2022 13:41:39 GMT
- Title: Understanding and Improving Graph Injection Attack by Promoting
Unnoticeability
- Authors: Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo
Han, James Cheng
- Abstract summary: Graph Injection Attack (GIA) is a practical attack scenario on Graph Neural Networks (GNNs).
We compare GIA with Graph Modification Attack (GMA) and find that GIA can be provably more harmful than GMA due to its relatively high flexibility.
We introduce a novel constraint, homophily unnoticeability, which requires GIA to preserve homophily, and propose the Harmonious Adversarial Objective (HAO) to instantiate it.
- Score: 69.3530705476563
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes, instead of modifying existing nodes or edges as in Graph Modification Attack (GMA). Although GIA has achieved promising results, little is known about why it is successful and whether there is any pitfall behind the success. To understand the power of GIA, we compare it with GMA and find that GIA can be provably more harmful than GMA due to its relatively high flexibility. However, this high flexibility also severely damages the homophily distribution of the original graph, i.e., the similarity among neighbors. Consequently, the threats of GIA can be easily alleviated, or even prevented, by homophily-based defenses designed to recover the original homophily. To mitigate this issue, we introduce a novel constraint, homophily unnoticeability, which requires GIA to preserve homophily, and propose the Harmonious Adversarial Objective (HAO) to instantiate it. Extensive experiments verify that GIA with HAO can break homophily-based defenses and outperform previous GIA attacks by a significant margin. We believe our methods can serve as a basis for more reliable evaluation of the robustness of GNNs.
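To make the abstract's key quantities concrete, here is a minimal, self-contained PyTorch sketch, not the authors' released implementation. The paper defines node-centric homophily via aggregated neighbor features; this sketch simplifies it to per-edge cosine similarity, and the function names, the pruning threshold, and the weight `lam` are illustrative assumptions.

```python
# Hedged sketch of (i) a homophily measure, (ii) a simple homophily-based
# defense, and (iii) an HAO-style regularized attack objective.
import torch
import torch.nn.functional as F

def edge_homophily(x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    # Cosine similarity between the endpoint features of every edge.
    # x: [num_nodes, feat_dim]; edge_index: [2, num_edges] in COO format.
    src, dst = edge_index
    return F.cosine_similarity(x[src], x[dst], dim=-1)

def prune_dissimilar_edges(x, edge_index, threshold=0.1):
    # A toy homophily-based defense: drop edges whose endpoints are too
    # dissimilar, which removes many naively injected connections.
    keep = edge_homophily(x, edge_index) >= threshold
    return edge_index[:, keep]

def hao_style_objective(logits, labels, x, edge_index, h_clean, lam=1.0):
    # Minimizing this maximizes classification error while penalizing any
    # drop of mean homophily below the clean graph's level h_clean.
    attack_loss = -F.cross_entropy(logits, labels)   # maximize model error
    h_pert = edge_homophily(x, edge_index).mean()    # perturbed graph
    penalty = F.relu(h_clean - h_pert)               # only penalize drops
    return attack_loss + lam * penalty

if __name__ == "__main__":
    # Toy usage with random data standing in for a real graph and model.
    x = torch.randn(8, 16)
    edge_index = torch.randint(0, 8, (2, 12))
    logits, labels = torch.randn(8, 3), torch.randint(0, 3, (8,))
    h_clean = edge_homophily(x, edge_index).mean()
    print(hao_style_objective(logits, labels, x, edge_index, h_clean))
```

In this simplified view, a large `lam` steers the attacker toward injections that keep homophily near its clean level, which is the intuition behind homophily unnoticeability, while `prune_dissimilar_edges` shows why naive GIA is fragile: low-similarity injected edges are the first to be removed.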
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their strong performance in areas such as the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting [58.91947205027892]
Federated learning has exhibited vulnerabilities to Byzantine attacks.
Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model.
A wealth of robust AGgregation Rules (AGRs) has been proposed to defend against Byzantine attacks.
arXiv Detail & Related papers (2023-02-13T03:31:50Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attacking surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against OddBall, a representative regression-based GAD system.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- TDGIA: Effective Injection Attacks on Graph Neural Networks [21.254710171416374]
We study a recently introduced, realistic attack scenario on graphs: graph injection attack (GIA).
In the GIA scenario, the adversary cannot modify the existing link structure or node attributes of the input graph; instead, the attack is performed by injecting adversarial nodes into it (a schematic sketch of this threat model follows this entry).
We present an analysis of the topological vulnerability of GNNs under the GIA setting, based on which we propose the Topological Defective Graph Injection Attack (TDGIA) for effective injection attacks.
arXiv Detail & Related papers (2021-06-12T01:53:25Z)
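The injection threat model described above can be pictured as enlarging the graph rather than editing it. Below is a small, hedged illustration (not TDGIA itself): the clean adjacency block stays frozen, and the attacker only controls the new rows/columns and the injected nodes' features; the function name, random wiring, and `edges_per_node` budget are made up for the example.

```python
import torch

def inject_nodes(adj, x, n_inj, edges_per_node=2):
    # Enlarge an [n, n] dense adjacency and [n, d] feature matrix with
    # n_inj new nodes. The original block of adj is copied unchanged:
    # the attacker may only wire up and parameterize the injected nodes.
    n, d = adj.size(0), x.size(1)
    new_adj = torch.zeros(n + n_inj, n + n_inj)
    new_adj[:n, :n] = adj                         # clean structure is frozen
    for i in range(n_inj):
        targets = torch.randperm(n)[:edges_per_node]  # placeholder wiring
        new_adj[n + i, targets] = 1.0
        new_adj[targets, n + i] = 1.0             # keep the graph undirected
    new_x = torch.cat([x, torch.randn(n_inj, d)], dim=0)
    return new_adj, new_x
```

A real attack like TDGIA would choose the wiring from a topological vulnerability analysis and optimize the injected features; the random choices here only mark out the degrees of freedom the adversary has.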
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine sparsity with an additional l_infinity constraint on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and perturbation constraints in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that their main drawback is having to use the whole graph for the attack, so time and space complexity increase as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)