Node Injection for Class-specific Network Poisoning
- URL: http://arxiv.org/abs/2301.12277v2
- Date: Thu, 7 Sep 2023 19:02:08 GMT
- Title: Node Injection for Class-specific Network Poisoning
- Authors: Ansh Kumar Sharma, Rahul Kukreja, Mayank Kharbanda, and Tanmoy Chakraborty
- Abstract summary: Graph Neural Networks (GNNs) are powerful in learning rich network representations that aid the performance of downstream tasks.
Recent studies have shown that GNNs are vulnerable to adversarial attacks involving node injection and network perturbation.
We propose a novel problem statement: a class-specific poison attack on graphs, in which the attacker aims to misclassify specific nodes in the target class into a different class using node injection.
- Score: 16.177991267568125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are powerful in learning rich network
representations that aid the performance of downstream tasks. However, recent
studies have shown that GNNs are vulnerable to adversarial attacks involving
node injection and network perturbation. Among these, node injection attacks
are more practical, as they do not require manipulating the existing network
and can be performed more realistically. In this paper, we propose a novel
problem statement: a class-specific poison attack on graphs, in which the
attacker aims to misclassify specific nodes in the target class into a
different class using node injection. Additionally, nodes are injected in such
a way that they camouflage as benign nodes. We propose NICKI, a novel attacking
strategy that uses an optimization-based approach to sabotage the performance
of GNN-based node classifiers. NICKI works in two phases: it first learns the
node representations and then generates the features and edges of the injected
nodes. Extensive experiments and ablation studies on four benchmark networks
show that NICKI is consistently better than four baseline attacking strategies
at misclassifying nodes in the target class. We also show that the injected
nodes are properly camouflaged as benign, making the poisoned graph
indistinguishable from its clean version w.r.t. various topological properties.
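As a rough illustration of this two-phase recipe, the sketch below implements a generic class-specific node-injection poisoner in PyTorch. The surrogate two-layer GCN, the soft-edge relaxation, the target-class-mean feature initialization, and all hyperparameters are illustrative assumptions; this is not the authors' NICKI implementation.

```python
import torch
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A = A + torch.eye(A.size(0))
    d_inv = torch.diag(A.sum(1).pow(-0.5))
    return d_inv @ A @ d_inv

def poison(A, X, labels, target_class, wrong_class, n_inject, steps=200):
    # A: dense symmetric adjacency (undirected graph assumed),
    # X: node features, labels: LongTensor of class ids.
    n, f = X.shape
    n_cls = int(labels.max()) + 1
    # Phase 1: train a small surrogate GCN on the clean graph to obtain
    # node representations and a differentiable proxy of the victim model.
    W1 = torch.randn(f, 16, requires_grad=True)
    W2 = torch.randn(16, n_cls, requires_grad=True)
    A_hat = normalize_adj(A)
    opt = torch.optim.Adam([W1, W2], lr=0.01)
    for _ in range(200):
        H = torch.relu(A_hat @ X @ W1)
        loss = F.cross_entropy(A_hat @ H @ W2, labels)
        opt.zero_grad(); loss.backward(); opt.step()
    W1, W2 = W1.detach(), W2.detach()

    # Phase 2: jointly optimize the injected nodes' features and relaxed
    # (soft) edge weights so target-class nodes drift toward wrong_class.
    # Initializing features at the target-class mean is a camouflage-style
    # heuristic assumed here, not the paper's exact scheme.
    mask = labels == target_class
    X_inj = X[mask].mean(0).repeat(n_inject, 1).requires_grad_(True)
    e_logit = torch.zeros(n_inject, n, requires_grad=True)
    atk = torch.optim.Adam([X_inj, e_logit], lr=0.1)
    y_wrong = torch.full((int(mask.sum()),), wrong_class)
    for _ in range(steps):
        E = torch.sigmoid(e_logit)  # soft edges: injected <-> real nodes
        A_big = torch.cat([torch.cat([A, E.t()], 1),
                           torch.cat([E, torch.zeros(n_inject, n_inject)], 1)])
        A_hat_big = normalize_adj(A_big)
        X_big = torch.cat([X, X_inj])
        H = torch.relu(A_hat_big @ X_big @ W1)
        logits = (A_hat_big @ H @ W2)[:n][mask]
        loss = F.cross_entropy(logits, y_wrong)  # push targets to wrong_class
        atk.zero_grad(); loss.backward(); atk.step()
    # Discretize the relaxed edges; simple thresholding is a simplification.
    return X_inj.detach(), (torch.sigmoid(e_logit) > 0.5).detach()
```

Relaxing the discrete injected edges to sigmoid weights keeps the whole objective differentiable end to end; the paper's actual edge-generation phase may be structured differently.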
Related papers
- Node Injection Attack Based on Label Propagation Against Graph Neural Network [11.410811769066209]
Graph Neural Networks (GNNs) have achieved remarkable success in various graph learning tasks, such as node classification, link prediction, and graph classification.
The attacker can easily perturb the aggregation process by injecting fake nodes, which reveals that GNNs are vulnerable to graph injection attacks.
We propose the label-propagation-based global injection attack (LPGIA), which conducts the graph injection attack on the node classification task.
arXiv Detail & Related papers (2024-05-29T07:09:16Z)
- Hard Label Black Box Node Injection Attack on Graph Neural Networks [7.176182084359572]
We propose a non-targeted hard-label black-box node injection attack on Graph Neural Networks.
Our attack is based on an existing edge perturbation attack, whose optimization process we restrict to formulate a node injection attack.
We evaluate the performance of the attack using three datasets.
arXiv Detail & Related papers (2023-11-22T09:02:04Z) - Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks (CONN), a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack that operates by injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we devise a camouflaged node injection attack, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Single Node Injection Attack against Graph Neural Networks [39.455430635159146]
This paper focuses on an extremely limited scenario of single-node injection evasion attacks on Graph Neural Networks (GNNs).
We propose a Generalizable Node Injection Attack model, namely G-NIA, to improve the attack efficiency while ensuring the attack performance.
Experimental results show that 100%, 98.60%, and 94.98% of nodes on three public datasets are successfully attacked, even when injecting only one node with one edge.
arXiv Detail & Related papers (2021-08-30T08:12:25Z)
- Graph Prototypical Networks for Few-shot Learning on Attributed Networks [72.31180045017835]
We propose a graph meta-learning framework, Graph Prototypical Networks (GPN).
GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
arXiv Detail & Related papers (2020-06-23T04:13:23Z)
- AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
- Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks [0.76146285961466]
By abusing graph convolutions, an attacker can influence a node's classification result by poisoning the node's neighbors.
We generate strong adversarial perturbations that are effective not only on one-hop neighbors but also on nodes farther from the target.
Our proposed method achieves a 99% attack success rate within two hops of the target on two datasets; a toy illustration of this multi-hop influence follows this list.
arXiv Detail & Related papers (2020-02-19T05:44:09Z)
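The indirect attack above works because a k-layer GCN aggregates features from each node's k-hop neighborhood, so poisoning a node two hops away still shifts the target's logits. The minimal sketch below demonstrates this on a hypothetical path graph with untrained random weights; the graph, features, and perturbation magnitude are assumptions for illustration, not the paper's setup.

```python
import torch

torch.manual_seed(0)
# Path graph 0 - 1 - 2: nodes 0 and 2 are not adjacent (two hops apart).
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
A_hat = A + torch.eye(3)                       # add self-loops
d_inv = torch.diag(A_hat.sum(1).pow(-0.5))
A_hat = d_inv @ A_hat @ d_inv                  # symmetric normalization

X = torch.randn(3, 4)                          # node features
W1, W2 = torch.randn(4, 8), torch.randn(8, 2)  # untrained GCN weights

def gcn(X):
    # Two propagation layers: node 0's output depends on its 2-hop hood.
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

clean = gcn(X)[0]                              # node 0's logits, clean graph
X_poisoned = X.clone()
X_poisoned[2] += 5.0                           # perturb only node 2
print(clean, gcn(X_poisoned)[0])               # node 0's logits change
```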