Single Node Injection Attack against Graph Neural Networks
- URL: http://arxiv.org/abs/2108.13049v1
- Date: Mon, 30 Aug 2021 08:12:25 GMT
- Title: Single Node Injection Attack against Graph Neural Networks
- Authors: Shuchang Tao, Qi Cao, Huawei Shen, Junjie Huang, Yunfan Wu, Xueqi
Cheng
- Abstract summary: This paper focuses on an extremely limited scenario of single node injection evasion attack on Graph Neural Networks (GNNs).
Experimental results show that 100%, 98.60%, and 94.98% of the nodes on three public datasets are successfully attacked even when only injecting one node with one edge.
We propose a Generalizable Node Injection Attack model, namely G-NIA, to improve attack efficiency while maintaining attack performance.
- Score: 39.455430635159146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node injection attack on Graph Neural Networks (GNNs) is an emerging and
practical attack scenario in which the attacker injects malicious nodes, rather
than modifying original nodes or edges, to degrade the performance of GNNs.
However, existing node injection attacks ignore extremely limited scenarios:
the number of nodes they inject may be excessive, making the attack perceptible
to the target GNN. In this paper, we focus on an extremely limited scenario of
single node injection evasion attack, i.e., the attacker is only allowed to
inject one single node during the test phase to hurt the GNN's performance. The
discreteness
of network structure and the coupling effect between network structure and node
features bring great challenges to this extremely limited scenario. We first
propose an optimization-based method to explore the performance upper bound of
single node injection evasion attack. Experimental results show that 100%,
98.60%, and 94.98% of the nodes on three public datasets are successfully attacked
even when only injecting one node with one edge, confirming the feasibility of
single node injection evasion attack. However, such an optimization-based
method needs to be re-optimized for each attack, which is computationally
prohibitive. To resolve this dilemma, we further propose a Generalizable Node
Injection Attack model, namely G-NIA, to improve attack efficiency while
maintaining attack performance. Experiments are conducted across three
well-known GNNs. Our proposed G-NIA significantly outperforms state-of-the-art
baselines and is 500 times faster than the optimization-based method at
inference time.
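To make the setting concrete, here is a minimal sketch of an optimization-based single node injection evasion attack in the spirit described above: one node is wired to the target by a single edge, and only the injected node's features are optimized by gradient descent to flip the target's prediction. The victim model SimpleGCN, the helper single_node_injection_attack, and all hyperparameters below are illustrative assumptions, not the authors' released code.
```python
# Illustrative sketch only: a single-node injection evasion attack against a
# small dense-adjacency GCN. Names, model, and hyperparameters are assumptions,
# not the authors' G-NIA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by vanilla GCNs."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class SimpleGCN(nn.Module):
    """Two-layer GCN on a dense adjacency matrix (hypothetical victim model)."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_norm = normalize_adj(adj)
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)  # per-node logits


def single_node_injection_attack(model, x, adj, target, n_steps=200, lr=0.1):
    """Inject ONE node connected to `target` by ONE edge, then optimize only the
    injected node's features so the model's prediction on `target` flips."""
    for p in model.parameters():  # victim model is fixed at test time
        p.requires_grad_(False)
    model.eval()

    n, d = x.shape
    adj_new = torch.zeros(n + 1, n + 1, device=adj.device)
    adj_new[:n, :n] = adj
    adj_new[n, target] = adj_new[target, n] = 1.0  # the single injected edge

    # The injected node's (continuous, unconstrained) features are the only
    # variables the attacker optimizes; a real attack would restrict them to
    # the dataset's feasible feature domain.
    inj_feat = torch.zeros(1, d, device=x.device, requires_grad=True)
    optimizer = torch.optim.Adam([inj_feat], lr=lr)

    with torch.no_grad():
        orig_pred = model(x, adj).argmax(dim=1)[target]

    for _ in range(n_steps):
        logits = model(torch.cat([x, inj_feat], dim=0), adj_new)
        # Maximize the loss of the originally predicted class for the target.
        loss = -F.cross_entropy(logits[target].unsqueeze(0), orig_pred.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        new_pred = model(torch.cat([x, inj_feat], dim=0), adj_new).argmax(dim=1)[target]
    return inj_feat.detach(), bool(new_pred != orig_pred)


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, c = 20, 16, 4
    x = torch.rand(n, d)
    adj = (torch.rand(n, n) < 0.2).float()
    adj = torch.triu(adj, 1)
    adj = adj + adj.t()  # undirected graph without self-loops
    model = SimpleGCN(d, 32, c)  # untrained stand-in for a trained victim GCN
    _, flipped = single_node_injection_attack(model, x, adj, target=0)
    print("target prediction flipped:", flipped)
```
In the paper's setup, the injected features would additionally be constrained to a feasible feature domain, and G-NIA amortizes this per-target optimization into a learned model, which is what makes it roughly 500 times faster than the optimization-based approach at inference time.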
Related papers
- Minimum Topology Attacks for Graph Neural Networks [70.17791814425148]
The robustness of Graph Neural Networks (GNNs) to adversarial topology attacks has received significant attention.
We propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
arXiv: 2024-03-05
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv: 2023-06-12
- Node Injection for Class-specific Network Poisoning [16.177991267568125]
Graph Neural Networks (GNNs) are powerful in learning rich network representations that aid the performance of downstream tasks.
Recent studies showed that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation.
We propose a novel problem statement - a class-specific poison attack on graphs in which the attacker aims to misclassify specific nodes in the target class into a different class using node injection.
arXiv: 2023-01-28
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv: 2022-10-23
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific, white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method to mount this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv: 2022-09-20
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv: 2022-08-03
- TDGIA: Effective Injection Attacks on Graph Neural Networks [21.254710171416374]
We study a recently introduced, realistic attack scenario on graphs: the graph injection attack (GIA).
In the GIA scenario, the adversary is not able to modify the existing link structure and node attributes of the input graph; instead, the attack is performed by injecting adversarial nodes into it.
We present an analysis of the topological vulnerability of GNNs under the GIA setting, based on which we propose the Topological Defective Graph Injection Attack (TDGIA) for effective injection attacks.
arXiv: 2021-06-12
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts additionally constrain the l_infty magnitude of the perturbation to keep it imperceptible.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework.
arXiv: 2021-06-10
- AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv: 2020-05-06
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.