Adversarial Camouflage for Node Injection Attack on Graphs
- URL: http://arxiv.org/abs/2208.01819v4
- Date: Sat, 23 Sep 2023 07:57:47 GMT
- Title: Adversarial Camouflage for Node Injection Attack on Graphs
- Authors: Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Liang Hou, Fei Sun,
Xueqi Cheng
- Abstract summary: Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we devote ourselves to camouflaging the node injection attack, making injected nodes appear normal and imperceptible to defense/detection methods.
- Score: 64.5888846198005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node injection attacks on Graph Neural Networks (GNNs) have received
increasing attention recently, due to their ability to degrade GNN performance
with high attack success rates. However, our study indicates that these attacks
often fail in practical scenarios, since defense/detection methods can easily
identify and remove the injected nodes. To address this, we devote ourselves to
camouflaging the node injection attack, making injected nodes appear normal and
imperceptible to defense/detection methods. Unfortunately, the non-Euclidean
structure of graph data and the lack of an intuitive prior present great
challenges to the formalization, implementation, and evaluation of camouflage.
In this paper, we first propose and define camouflage as distribution
similarity between ego networks of injected nodes and normal nodes. Then for
implementation, we propose an adversarial CAmouflage framework for Node
injection Attack, namely CANA, to improve attack performance under
defense/detection methods in practical scenarios. A novel camouflage metric is
further designed under the guidance of distribution similarity. Extensive
experiments demonstrate that CANA can significantly improve the attack
performance under defense/detection methods with higher camouflage or
imperceptibility. This work urges us to raise awareness of the security
vulnerabilities of GNNs in practical applications.
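To make the camouflage definition concrete, below is a minimal illustrative sketch (not the paper's actual metric or code) that treats camouflage as distribution similarity between the ego networks of injected nodes and normal nodes. The mean-pooled 1-hop ego-network features, the RBF-kernel Maximum Mean Discrepancy, and the assumption that node ids index rows of the feature matrix are choices made here purely for illustration.

```python
# Illustrative sketch: camouflage as distribution similarity between the
# ego networks of injected nodes and normal nodes. The ego-feature pooling
# and the RBF-kernel MMD are illustrative assumptions, not CANA's metric.
import numpy as np
import networkx as nx

def ego_feature(G: nx.Graph, X: np.ndarray, v: int) -> np.ndarray:
    """Mean-pool node features over the 1-hop ego network of node v.
    Assumes node ids are integers that index rows of X."""
    nodes = list(nx.ego_graph(G, v, radius=1).nodes())
    return X[nodes].mean(axis=0)

def rbf_mmd(A: np.ndarray, B: np.ndarray, gamma: float = 1.0) -> float:
    """Biased estimate of squared MMD between samples A and B (RBF kernel)."""
    def k(P, Q):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return float(k(A, A).mean() + k(B, B).mean() - 2.0 * k(A, B).mean())

def camouflage_distance(G, X, injected_nodes, normal_nodes) -> float:
    """Lower values mean injected ego networks look more like normal ones."""
    inj = np.stack([ego_feature(G, X, v) for v in injected_nodes])
    nrm = np.stack([ego_feature(G, X, v) for v in normal_nodes])
    return rbf_mmd(inj, nrm)
```

A lower distance means the neighborhoods of injected nodes are statistically harder to separate from those of normal nodes, which is the intuition behind both the camouflage objective and the camouflage metric described in the abstract.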
Related papers
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet threatening training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Node Injection for Class-specific Network Poisoning [16.177991267568125]
Graph Neural Networks (GNNs) are powerful in learning rich network representations that aid the performance of downstream tasks.
Recent studies showed that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation.
We propose a novel problem statement - a class-specific poison attack on graphs in which the attacker aims to misclassify specific nodes in the target class into a different class using node injection.
arXiv Detail & Related papers (2023-01-28T19:24:03Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z) - Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific, white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z) - What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z) - Single Node Injection Attack against Graph Neural Networks [39.455430635159146]
This paper focuses on an extremely limited scenario: a single node injection evasion attack on Graph Neural Networks (GNNs).
We propose a Generalizable Node Injection Attack model, namely G-NIA, to improve the attack efficiency while ensuring the attack performance.
Experimental results show that 100%, 98.60%, and 94.98% of the nodes on three public datasets are successfully attacked even when injecting only one node with one edge.
arXiv Detail & Related papers (2021-08-30T08:12:25Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason is that they have to use the whole graph for attacks, resulting in increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data (see the sketch after this list).
arXiv Detail & Related papers (2020-09-08T02:17:55Z) - AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
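One of the summaries above refers to a Degree Assortativity Change (DAC) metric. As a rough illustration only, the sketch below assumes DAC is simply the shift in the degree assortativity coefficient between the clean graph and the perturbed graph; the cited paper's exact definition may differ.

```python
# Rough sketch of a Degree-Assortativity-Change-style measure, assuming it is
# the shift in degree assortativity between the clean and perturbed graphs.
# The cited paper's exact definition of DAC may differ.
import networkx as nx

def degree_assortativity_change(clean: nx.Graph, perturbed: nx.Graph) -> float:
    before = nx.degree_assortativity_coefficient(clean)
    after = nx.degree_assortativity_coefficient(perturbed)
    return abs(after - before)

# Toy usage: inject one node (id 100) with a single edge and measure the shift.
G = nx.barabasi_albert_graph(100, 2, seed=0)
H = G.copy()
H.add_edge(100, 0)
print(degree_assortativity_change(G, H))
```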